Thursday, August 25, 2016

SAFe: Setting up the Value Stream level

After various discussions about the alleged massive management overhead introduced by SAFe 4.0, let me clarify what's really brought in with the additional level called "Value Stream". The Value Stream Level combines multiple Agile Release Trains. As a matter of fact, you don't even want to go there unless you have significantly more than a hundred developers working on the same product. This level is only necessary in massively scaled product development, something you want to avoid in the first place. 
But when you can't - you need to find a way to deal with the problems introduced by an organization the size of several enterprises collaborating in (near) real time. And SAFe has a proposal for how to get you started on that one, too.


Defining the value stream

What's a value stream? Simply put, it's all stuff happening "from customer (demand) to customer (satisfaction)". In some enterprises, that's obvious - while in others, it may be hard to grasp.

An example value stream
Let us take an example: "What is the value stream of a smartphone?" - That depends. When you are talking about a telco carrier, you as a customer sign a contract, get a SIM card and a device, register it - and start calling. You then get monthly invoices, and that's it - from the customer's side.

But what is going on in the background:
To get a contract, you select a package, typically consisting of tariffs, prices, products, options and bundles that will be assigned to your customer account. All of this is handled in so-called "business support systems" (BSS). As a customer, you don't care much how they do that, but BSS platforms are often provided by specialized organizations due to their complexity. It may be fair to label BSS platforms a "product" in their own right, required not by you, the customer - but by the telco carrier in order to serve their customers. Depending on the carrier, this line alone might employ 500+ people.

Next, of course, you want to make a call. But for that, your device must be activated in the telco network. That requires some interaction between the BSS and the network stations. For simplicity's sake, let's just say that the physical network is yet another sub-product required to provide service for you, but ordered by the carrier.
There's also a product line called "Operations Support Systems" (OSS) taking care of that. There are major corporations doing only the network base station business, and there are major corporations doing only OSS. The things going on here are highly technical and of interest to nobody except operators - yet without them, you couldn't make a simple phone call.

This means our example value stream actually consists of three product lines, only two of which are exposed to you as a customer. In each of these product lines, some magic happens so that you get to make your call.

So, here's what the value stream would look like:

A value stream perspective for a mobile network operator
As noted already, BSS, OSS and Network may be completely independently organized "technical value streams", for example when they are outsourced. SAFe would not advocate starting by insourcing all activities, especially where it does not make sense from a revenue perspective.

Continuing with our example, let's just assume we are dealing with a so-called "Mobile Virtual Network Operator" (MVNO) who does not have their own network. In this case, the "Network" and even the OSS would be a purchased service, provided as a closed black box. Our own development would use the output of these value streams, but would not directly interact with them in the process, so our SAFe organization would embed, but not directly touch, them.

But we still have a problem: There are the BSS teams providing value to end customers by setting up new product lines, and also those who provide value to our own business with things like accounting, tax records and audit reporting (plus our black-box technical value streams providing OSS and Network services for end customer value) - but they're too many to organize in a single Agile Release Train (ART). Now what?

Splitting up the value stream into multiple ARTs

An Agile Release Train can accommodate anywhere from 50 to 150 developers. Once we get beyond that, things like Dunbar's number and regular organizational complexity get in our way. So we need to keep each ART at a sensible size, while still being able to deliver useful products to our customers.

Here are some splitting strategies. Please note that while the terms "Bad", "Better", "Best" are definitely judgmental, there may still be pressing reasons to follow a specific approach.
A "bad" choice is still better than paralysis.

Bad: Component split

Probably the most obvious form of splitting is a technical component split, allowing developers to focus on a specific subset of technical systems. While that is possible, it's a great way of maximizing dependencies and coordination overhead while minimizing productivity and customer value. We don't want to go there.

Better: Feature category split

In our example, we might consider splitting the value stream around categories such as tariffs, campaigns and infrastructure. These kind of feature areas would be a good starting point to form a feature team organization that can deliver end-to-end customer value. Of course, there will still be dependencies - but far less than a component setup.

Best: Customer segment split

Probably the most common form of splitting is by customer segments such as "private customers", "business customers", "VIP" and "internal customers", having feature teams serve each segment independently. With this approach, strategic management can easily decide to boost or reduce the growth of a customer segment by adjusting how many people work in the respective segment. Of course, there's also interaction between the segments, but with a robust product, these should never be game breakers.


Setting up multiple ARTs

So, after identifying how we want to split up our value stream, keeping in mind that each split should be between 50 and 150 developers in size, we'll end up with multiple independent Agile Release Trains, together forming a Value Stream.
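As a back-of-the-envelope illustration (my own sketch, not part of SAFe itself), the 50-150 rule directly constrains how many ARTs a value stream of a given size can be cut into:

```python
import math

def feasible_art_counts(developers, min_size=50, max_size=150):
    """Range of ART counts that keeps every train between
    min_size and max_size developers (illustrative sketch only)."""
    if developers <= max_size:
        return [1]  # fits in a single ART - no Value Stream level needed
    lowest = math.ceil(developers / max_size)  # fewest trains without overfilling any
    highest = developers // min_size           # most trains without starving any
    return list(range(lowest, highest + 1))

# The 500-person BSS line from our example could be cut into 4 to 10 ARTs:
print(feasible_art_counts(500))  # [4, 5, 6, 7, 8, 9, 10]
```

The wide range is the point: the size rule alone doesn't tell you how to split - that's what the Bad/Better/Best strategies above are for.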

After reaching clarity on which developer is assigned to which ART (for clarity's sake: every developer works on one ART, and every agile team is part of one ART), there are multiple ARTs to launch and coordinate.

Here is the proposed SAFe structure for setting up multiple ARTs within a single value stream:

The Value Stream Level - a team of ARTs

This one should create a déjà vu, as it looks exactly the same way an Agile Release Train is set up - and this similarity is intentional.

In another article, we will describe in more detail how the roles and responsibilities change in comparison to a single ART when this form of split occurs.

Summary

Coordination at Value Stream Level becomes an issue when more than 150 developers collaborate on the same product - and even then, the complexity of what you do depends highly on how your organization is set up. On Value Stream Level, you may have multiple ARTs sliced in different setups, you may have black boxes of consumed services, and so on.

Going into this level of complexity is only necessary for the largest product groups. SAFe provides a way for them to get started in a structured way even when there are too many people to coordinate within a single ART.

Do not take on the added complexity of Value Stream level coordination unless it is inevitable.

Disclaimer: Opinions expressed in this article are the author's own and do not necessarily coincide with those of Scaled Agile Inc.








Wednesday, August 24, 2016

SAFe: The structure of an Agile Release Train

I have heard many different views of what an Agile Release Train (ART) actually is, ranging from a predetermined release schedule all the way down to nothing other than a renamed line organization. None of these are appropriate. Let us clarify its basic intention. As Dean Leffingwell puts it, an ART is no more and no less than a "team of teams". But what does that look like?


One Team Scrum

Basically everyone is familiar with the constellation of a Scrum team, but for brevity's sake, let me include a small summary: Every Scrum team has a Product Owner, a Scrum Master - and the developers. This same constellation is more or less applicable for other agile teams - even if they don't actually use Scrum.

A Scrum team


Multi Team Scrum

But since a Scrum team is limited to 3-9 developers, what does that look like when your organization has, say, 50, or 80, or 150 developers? Do you put the PO outside the team? Yes and no. The Scrum Master? Maybe, maybe not. How do developers interact?
In fact, Scrum does not answer any of these questions, as the scope of Scrum is a single team. Consequently, larger organizations adopting agility struggle to find their answers. Their Scrum organization sooner or later looks like this:

A Multi-Team Scrum adoption

This model actually works, but it leaves some questions unanswered, such as: "How do we make sure we're all working on the most valuable stuff?" - "How do we make sure we're not impeding each other?", or, for business: "Whom do I talk to with my request?" - "Who could take this feature?" - "What's Priority 1?" - "Is there a way to get this out of the door earlier?" - "When will the next major release be ready?"

The need for coordination

While this may still be resolved for 3-4 teams, this scenario can become a nightmare for business when there are 10 or more teams: transparency, the key to any decent agile development, is lost in the mud. The more narrowly development teams focus, the more likely they are not working on the highest overall priority.

The first obvious level of coordination is: the Product Owners need to be aware of what the other POs are working on, what the overall Backlog looks like, and where their own priorities sit within the bigger system.

Typically, in large organizations, impediments are endemic to the overall organization. As such, even independent Scrum teams will all be struggling with the same or similar problems caused by the bigger system. Likewise, each team itself will be powerless to change the entire system.

As such, the second obvious level of coordination is: The Scrum Masters should be aware of what's going on in the other teams around them, and how their team affects other teams.

Another problem arising in this scenario is that teams may suggest or implement local optimizations which may be fine for their own team, but detrimental to the other teams! For example, think of one team deciding to switch to an exotic programming language like Whitespace, because they're all Whitespace geeks: How can other teams still work on that code?

As such, the third level of coordination is: The Developers should be aware of what's going on in the other teams around them, and how their team affects other teams and the product.

The SAFe approach

What SAFe® does here, is basically nothing more and nothing less than consider a "Scrum team" like a developer in a larger organization and create the same structure on a higher level:

An ART - a team of agile teams

Looks an awful lot like a Scrum team - and that's intentional.

New Roles

Before we get into "Whoa, three new roles - more overhead!", let us clarify a few things: First, huge organizations do require more overhead. Don't go huge unless inevitable. Second, while SAFe® suggests these roles, it does not mandate them to be full time roles. It's entirely possible that these are merely additional responsibilities of existing people. However, experience indicates that in huge organizations - these things tend to become full time jobs.


The Product Manager (PM)
The PM relieves each individual PO of aligning the overall big picture with the different stakeholders.
The big difference between a PM and a PO is that while the PO works with teams and individual customers, the PM works with POs and the strategic organization. Their main responsibility is making sure there is only one overall "Product Backlog" from which all the teams pull work - so that at any given time, there is only one Priority 1 in the entire organization.

The Release Train Engineer (RTE)
You could say that the RTE is a "super Scrum Master", but that's not quite the point. While their responsibility is definitely similar, they don't work as much with a team as they work with the organization and management: For the teams, we already have Scrum Masters. 
The RTE, on the other hand, paves the way to corporate agility. The main concern of the RTE will be the legacy structure around the teams, to create a positive learning and innovation environment to nurture the agile teams.

The System Architect (SA)
The System Architect is the only genuinely new role on the ART. To clear up a common misconception about agile architecture right from the start: their responsibility is not to draw funny diagrams of cloud castles and force the teams to implement them. Rather, their role is to guide and coach architecture, so that we don't end up with uncontrollable wild growth. Likewise, when individual team members have questions about the architecture, the SA would be the first person to turn to.

Changes to existing roles

The Product Owner (PO)
A Product Owner may be in charge of more than one team. Practice indicates that 1-4 teams tend to work out, otherwise the risk of losing focus increases. 
At scale, POs tend to become specialized in areas of the product (such as Product Catalog or Customer Services) and need to synchronize the overall big picture with each other. Most of all, they need to synchronize with the PM, who feeds back into corporate strategy.

The Scrum Master (SM)
A Scrum Master may also be working with more than one team. Practice indicates that 2 is already the limit.
Facing the team, the main difference for the SM is that they need to encourage the team to interact with people from other teams, rather than remaining in a bubble of their own.
Facing the organization, the SM has to have a much deeper understanding of "Spheres of control", and communicate the impact of outbound impediments. They may need to hand over large blockers to the RTE, and may likewise receive input from the RTE when their team needs to budge in order to move a larger block out of the way.


Summary

I hope that this article explains how SAFe®'s structure of the ART is not "relabelling the same old thing", but simply putting Scrum on a bigger level.
To repeat again, "don't go into scaling unless inevitable". But when you need to, the ART model minimizes the deviation from good Scrum practices.




Friday, August 19, 2016

Be careful of so-called "agile coaches"!

An agile coach is supposed to help agility "stick" within an organization. But that is not always the case. Unfortunately, the label "agile coach" is not a protected trademark. Anyone can wear that title. As such, there is a huge risk that the so-called "coach" will do more harm than good. Caveat emptor!
Here are a few stories of the "agile coaching" I have experienced, so that you can actually avoid it. As a disclaimer: I do not consider all agile coaches to be quacks. There are a few whom I highly respect. But there's a lot of quackery giving them a bad name - and not many talk about it.

Purposefully unhelpful 

Probably the most idiotic phrase in the arsenal of an "agile coach" is "You need to find this out by yourself". Of course, that is supposed to inspire self-learning. But honestly, not everyone wants to learn everything by themselves. Here's my story:
I just came into a new enterprise as a consultant. I asked the team coach "What's the wifi password?" - "You need to find that out by yourself"
This guy was serious that I should rather learn the WiFi password by myself than have someone "tell" me. Dude. I could paint a picture of a stick-man, label it "agile coach", and it would be more useful than such a coach. Why do people even hire coaches who can't discriminate when self-learning makes sense?

One trick pony

They say that for a coach, moderation, conflict management, coaching and mediation are key skills. This has the unfortunate side effect that we see "agile coaches" popping up who are domain experts in these exact subjects - and nothing else! Meaning: They are sociology or psychology majors who have never written a single line of code and are now trying to teach developers how to work better. Here is my story:
I was working with a team that faced numerous difficulties. One of these was the lack of a coach. So, they hired one who was really good at talking and creating a positive mood: Actually, too good. Unfortunately, this person had only ever attended a 2-day Certified Scrum Master course and NEVER worked with a software development team.
They had zero knowledge of things like technical debt, Continuous Integration, software testing or other engineering practices - not even PO stuff like backlog management, value prioritization or right-sizing the work!
The team was going in full circles, continuously struggling to figure out stuff "everyone knows" and caught management attention eventually because of high defect rates and unusually low throughput. It was blamed on the developers. The team got disbanded and forcefully rearranged. The "coach" never realized anything was wrong - because hey, the team was always happy and learning!

Feigned expertise

How can you coach something you're actually clueless about? It seems that for some "agile coaches", agile experience is truly optional. They think that having a couple certifications qualifies you and give themselves a label of expertise they do not actually possess. Here is my story:
I occasionally meet with an "agile enterprise coach" (CSP) to discuss the various problems they face. Based on their CV, they've got a decade of "agile experience". At first I was befuddled when they started asking me trivial questions about things like backlog prioritization or why people limit WIP. I realized that this person had never really worked in an agile way: They had no idea what the real purpose of Continuous Integration was, they had never even attended, much less moderated, a Retrospective - and they had never actually seen what a workable Product Backlog looks like!
Oddly enough, this person is seriously working in enterprise agile transformations, introducing Scrum to teams, even coaching and educating internal Scrum Masters and managers. A look behind the scenes revealed that things their clients are still struggling with after years could have been done within weeks.
Seems like the old conmen statement "There's a sucker born every minute" still holds true.

Hiding incompetence

A coach can always conveniently hide behind "stimulating self-learning". I'd call it more fair to say "Some things I know. These I will help you with. Other things, we'll learn together". Especially in the latter category, I personally call it unethical to climb on a pedestal and profess to guide others' learning journey. But here's my story. I heard it over a cup of coffee with an upper manager:
A large product group tried to adopt Scrum for the development of an important product a good decade ago. Long story short, 500-people Scrum is not the same as one team. So, they had "challenges". And since they couldn't figure out any way of getting past a specific one, they spent major bucks and flew in a highly reputable "Scrum coach" to make progress. For two hours the coach answered every question with a counter-question or reframed it. But the client felt there was no substance. Finally, the manager's collar popped and he burst out: "Now tell me, ONLY with a Yes or No: Do you know how to solve this problem?" Pushed into the corner, the answer was "No". At which point, the manager exploded: "Then this meeting was 100% waste." Not only did the coach never try to approximate a solution or give helpful pointers, they simply left the client stuck with an unresolved problem. Even years later, to that manager - and their peers - "Scrum coaching" is associated with that specific name and has a very sour aftertaste.
It should be fairly easy to state what your competencies are and what aren't. It's fair game to state that you don't know everything. But when others rely on your help, it's unfair to leave them hanging.
Note how "problem solving" is not mentioned by coaches as a coaching skill.


Getting away from the stuff that I would actually call fraudulent - where a client's inability to recognize the coach's incompetence is used to make a quick buck - let us now turn to the softer area of mindset.

Unable to see the big picture

Good coaches should be unbiased, because bias prevents us from seeing the big picture. Reflection and self-awareness help us to overcome bias to serve others better. Or so the theory goes. Some of the most biased people I have met in my life bear the title of "agile coach". Their bias is so incredible that they try to convince me of silver-bullet solutions that simply won't work in context. Here's my story:
I was once working with a company that had a HUGE quality issue: Their legacy product was a technical garbage heap: Developers literally had nightmares about the code base. Some threatened to quit were they forced to dig into that mess any deeper. Customers were rioting, Customer Support was desperate. Customer problems (such as: lost orders, missing payments, wrong products shipped) never got fixed. I like to name things the way they are. When a customer spends money for A and then gets B, that's a DEFECT. A failure that the customer does not want. Period. So, I was fighting tooth and nail with management to limit WIP and value-prioritize defects so that we could actually drive down defect rates. The results were splendid: Customer Service actually started giving names to developers that were no longer synonymous with "monkey". Anyhow. Along comes this veteran "agile coach" who suggested: "You shouldn't call them defects. Wherever I go, the first thing I do is to remove that label. This will cause an important mindset change!"
I spent over an hour mostly listening to why it's important for the team that the PO treats all the work equally. They didn't even account for the fact that "defect", in that case, was not merely a label but a metric to draw attention to the horrible technical mess, so that we could have sufficient power to weigh the need for a long-term technology change against the need for short-term business evolution (i.e. new features).
I did get their point, but I saw they didn't get mine. And they didn't care to.

Misunderstanding assumptions

As I stated elsewhere, people can and do make assumptions all the time. We navigate in what we perceive as "reality" by making and deriving assumptions. And some of them are inconsistent with each other or with evidence presented. As such, we should always be ready to abandon our assumptions in favour of better ones. "Five Why" analysis can help us explore our assumptions. But some people just don't get it. Here's my story:
A team gathered for their retro. Within a few minutes, they simply decided "We need to write more unit tests". So, the coach dug out their Five-Why tool: "Why?" - "Because we have too many defects" - "Why?" - "Because we don't have enough unit tests." - "Why?" - "Because we didn't think they were that important." - "Why?" - "Because we didn't know." - "Why?" ... - "Dude, shut up your food hole!"
This team had already learned their lesson, but the coach made it look like there was more to it - to the point where it really just got nauseating. "Five Why" is one of many ways to uncover false or misleading assumptions, but there's a point where it's fairly safe to simply let it go. A coach should not dig out all assumptions. They should be aware which assumptions are reasonable and which are unreasonable.

Wrong focus

Agile coaches might focus on the wrong things when they miss the big picture. Especially when their understanding is limited, they will quickly optimize in the wrong direction. Here's my story:
I was working with an organization where a certain middle manager always tried to impose their specific ideas (such as: a separated test team, using HPQC rather than Spock etc.) on teams. As I was doing my best to rally management support for the teams' ideas, I got into the line of fire from that manager. Basically, he was undermining the technical quality measures built by the teams with an email to ALL the POs and coaches. So, I replied to ALL, because I wanted ALL to take a stand. What happened? This "agile coach" suggested introducing business email etiquette rules, because they felt bothered by a Reply-All on a matter they considered personal between me and the manager. So, we had etiquette rules enforced. Great! Problem solved! ... About half a year after I left, the manager won - now they have a Test Department reporting test results in HP-ALM. But hey, at least they have formal email etiquette rules!
It's actually quite funny how often agile coaches propose a solution without engaging in direct dialog with the concerned parties - and without trying to understand the problem they are solving. No mediation session was held to uncover why the conflict actually existed. The real problem never got solved. Neither any of the many coaches nor any PO bothered to understand the fundamental problem.



Conclusion

Am I perfect and pointing fingers elsewhere? No. I undoubtedly have some communication issues, and maybe many of the situations I encountered would have turned out differently if I had known how to communicate better. But I learn.
However, I would also expect "agile coaches" to bring honour to their profession.

When the solution isn't known, approximate. But be straight about it. Never claim to help others with things you don't understand: That's fraud.

Especially from a coach, I would expect the following: Be fast to learn, but slow to judge. Engage in dialog. Never decide before verifying your own assumptions. Be ready to discard your preformed assumptions. Don't draw biased conclusions. Let people know when you don't know.

It's called PDCA for a reason: Never act before checking. And, from a coach, I'd expect that to be a double check.

Don't play games: a coach is not a mad scientist!


Final disclaimer: I do not consider all agile coaches to be quacks. There are a few whom I highly respect.




Wednesday, August 17, 2016

Agile learning for starters

I have previously discussed the "cost of learning" and its impact on the learning strategy. After establishing that we should always keep this cost of learning below the Point of No Return, let us consider the differences in learning. The dogmatic statement "A coach should not prescribe a solution, but foster self-learning" presumes that self-learning is universally the best approach. But is it?

Let us consider which companies/teams typically call for help, based on this simple model:

Do you know why you don't know what you don't know?

There's a hidden relationship to the Cynefin Framework here: software development is a socio-technological problem, and the issues of communication, understanding and skill are just a few factors affecting a team's performance. We work in the complex domain, where any model has an inherent error.
Usually, when a company requests external help, they are at least aware that they don't really know what their problem is, and they assume someone else can help them make progress. In terms of our model, uncertainty is high and people admit that their specific knowledge and understanding of the problem domain is shallow. That's good. It's a basis for learning.

Initiating problem solving

We have a wicked problem here: How do we know we're doing the right thing - and how do we know we're getting better at it?
A consultant has no choice other than to first gain clarity on whether the problem is comparable to a problem with a known solution - so they would first try to drive down uncertainty by asking questions and experimenting with the process.

If the problem is in a domain where deep expertise is available, the problem solving process is reduced to tapping into available expertise.

If the attempt to reduce the problem to a domain where a solution is known fails, this indicates that we're working in the domain of the Unknown.
This splits down again: either we know that all known solutions fail, in which case we need to innovate - or all attempts to reduce uncertainty have failed, which indicates our problem is ill-defined and we need to clarify it until we have a workable problem.

Innovative problem solving

If there is need to innovate, we're pretty much clear that we'll be using empirical data, feedback loops, inspect+adapt and experimentation to iteratively anneal the situation. The best thing a consultant can do in this situation is to provide support based on their own experience to discern which experiments make sense and what the available data implies.

There are tons of techniques for innovative problem solving, starting with Kaizen Events, Design Thinking, TRIZ ... potentially even a full-blown Design For Six Sigma (not advised). Determining the suitable problem solving technique may also be at the discretion of the consultant.

Introducing known solutions

When expertise is available, the consultant must factor in the impact and urgency of getting the problem solved.
Impact is high when there is a risk of crossing the Point of No Return, i.e. destroying the company / team, or have individuals lose life, health or their job.
Urgency is high when only one shot is possible.

  • If both impact and urgency are high, a dogmatic solution will save time at the expense of leaving inherent understanding low. Autonomous learning is purposefully replaced with prescriptive teaching for a greater good.
  • If impact is high, yet urgency is low, the consultant may choose to underline the solution process with moments of learning to deepen understanding. This reduces long-term dependency and the risk of misunderstood assumptions around the solution.
  • If impact is low, yet there is a sense of urgency, the consultant might actually provoke "learning from failure" to create deep understanding for the next time.
  • If both impact and urgency are low, the consultant should not invest further time. Providing a pointer on how the team could learn to solve the problem can be sufficient. If they learn - good. Otherwise - no harm done.
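The four cases above boil down to a simple two-by-two decision table. Here is a minimal sketch of it (the function name and return labels are my own, purely illustrative - not SAFe or coaching canon):

```python
def consulting_approach(high_impact: bool, high_urgency: bool) -> str:
    """Map a problem's impact and urgency to the consultant's approach,
    following the four cases described above (illustrative labels only)."""
    if high_impact and high_urgency:
        return "prescriptive teaching"          # dogmatic solution, saves time
    if high_impact:
        return "solution with learning moments"  # deepen understanding, reduce dependency
    if high_urgency:
        return "provoke learning from failure"   # low stakes, build understanding
    return "pointer only"                        # don't invest further time

print(consulting_approach(True, True))  # prescriptive teaching
```

The point of writing it down this way is that the choice is deliberate: the consultant trades learning depth against time, based on what the situation can afford.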

Summary

In this article, we described only the consultant's approach when the team lacks knowledge and ability that is available to the consultant. A different approach is required when the team's knowledge exceeds that of the consultant.


A good consultant weighs the costs of learning against the benefits of learning and chooses the optimal approach, carefully considering the tradeoff between short-term results and long-term results.
Innovative problem solving should generally not be used for known solutions, since that approach is inherently inefficient. Although it facilitates learning, it also maximizes the cost of learning.

Coaches who dogmatically insist on facilitating innovative problem solving to maximize learning might force the team to reinvent the wheel when a firetruck is sorely needed. That's not helpful. It's snake oil.
There are times for learning and times for just doing. Know the difference.

Wednesday, August 10, 2016

Coach, Trainer or Consultant - a false dichotomy

There are a lot of opinions going around on the Internet concerning "coaching vs. consulting". Especially coaches who like to distinguish their position will pose this question and suggest that consulting is somehow inferior to coaching. Let's leave the emotional aspect out of this and reduce it to assumptions. In this article, I will include "training" as well, because of the significant overlap.


The model

For all three services (coaching, consulting, training), there are two sides involved: the service provider and the service taker (client). Both make assumptions about themselves and about the corresponding other party. For simplification purposes, we will not list all assumptions, but focus on the essentials that are related to learning.

Underlying assumptions for each role

Interpretation of the model

The first thing we should be clear about is that these are all just assumptions.
Since they are assumptions, it's good to clarify that both client and service provider have the same understanding of these assumptions beforehand, since they define expectations.
These assumptions are not axioms, since each variable can be verified objectively by asking questions and observation. Neither client nor service provider should turn any of these assumptions into dogma and insist they be true regardless of reality. You must accept that any of them may turn out invalid at any time.

Provider responsibility

For each of the three roles, the service provider is expected to have a clearer understanding of the big picture than the client. As such, regardless of whether you are coach, trainer or consultant - you need to be actively on the lookout for whether the above assumptions are valid. One skill you need to bring to the table is the ability to realize when they are invalid, because that breaks the model of your role. When they are invalid, you must take steps beyond dogmatic insistence on the definition of your role in order to move in the direction of success.

Client responsibility

The main reason for getting help is that you don't really know what you may need to know. Your initial assumptions may be invalid because of what you did not know. Given better information, you need to adjust your course of action accordingly.


Application

Roles are really just transient. 

It's probably easiest for the trainer who provides a specific training service to stick to the agenda and simply leave. Worst case, the training did not help and the very limited few days of training are wasted. However, even trainers often add coaching techniques and modules to their trainings where they actively generate learning with their clients. In rare cases, that may turn into consulting sessions. When a trainer leaves, the client should have an appetite to try out the training knowledge and learn more.

The line is significantly more blurry for coaches and consultants.
The best thing a consultant can do is enable the client to solve the specific problem and related problems individually by producing learning within the organization. This may include domain-specific trainings in skills the consultant provides and coaching key players in doing the right thing. When a consultant walks out, the client should be able to say "From here, we can move forward by ourselves." - which is the best thing a coach would also hope to achieve.

For a coach, the main difference to a consultant is that there is no specifically defined problem initially and that the coach is not expected to come up with a solution. However, a good coach should understand that there are situations where simply giving a directed pointer to an existing solution in order to instill an appetite for learning and experimentation is a good way forward. That's a training situation. Also, sometimes, the coach needs to take carefully considered shortcuts in the learning process to prevent irrecoverable damage: that's consulting. Depending on where on the learning curve the team is, that can be quite a big part of the job.


Summary

The assumption that coaching and consulting are two distinct roles and that you must be either one or the other is a false dichotomy. In the same breath, it's an even worse misinterpretation to consider one of them "superior" to the other, because both simply rely on different assumptions. A good consultant will use significant coaching techniques in context, and a good coach will use significant consultative techniques in context as well. "Context" depends on observation and interpretation and is usually very mixed. Be ready to accept this mix. Your actions then also need to reflect this mix.

Being dogmatic on one specific role and insisting on the above assumptions as axioms is done only by people who are unable or unwilling to consider the systemic implications of their own actions. That's snake oil. Caveat emptor.