Sunday, March 12, 2017

Make People Awesome? Give Them Superpowers!





We need to explain our primary statement of benevolence, expressed as "make people awesome." This is intended to express that we have an explicit goal of benefitting specific others with all of our work.

I have had so many apologetic conversations about the term, and it's been described in several articles (some well, some rather poorly).

The message is singularly hard to express, at least in a form that fits on the sticker.

Admittedly, it's 2017. Everyone is on high alert, and words trigger people in dozens of interesting ways.

To date, the primary triggers are:
  1. "make people" - which tends to be heard as "coerce, demand, or force"
  2. "awesome" - which tends to be heard as "valley girl talk", indicating the speaker is bubble-headed or shallow.  

In any longer discussion, we eventually get around to Kathy Sierra's book, "Badass: Making Users Awesome," which was one of Josh's inspirations. There were a few tailorings going on, but the idea was there.

We don't say "badass" because that, by itself, is sometimes used to describe someone who is forceful and violent.  I'd be surprised if Kathy S. has not had many clarifying discussions over that misinterpretation.

We also don't say "users" in our little slogan. We believe in an overarching benevolence.  We want to make users "awesome" of course, but we also want that benevolence to extend to our teammates, managers, support people, DevOps, QA, sales. We want it to extend to our customers' customers.

Some people take "make people awesome" to be a demand that managers of development teams behave a certain way toward their teams. We're not excluding that message, but suggestions that we change it to something like "get out of your team's way" restrict the message to micromanagers only, rather than expressing benevolence to our whole community.

Sadly, some people take the whole statement to mean "demand that other people behave in a way that you see as awesome" -- very far from what we intend. We would have said "demand awesomeness from others" if we meant that.

Likewise "be kind" doesn't cover it.  We are not trying to "just be nice" but rather to give the people we deal with an extra measure of capability, awareness, competence, and power. We want them to be, properly "awe-inspiring" to the people they work and live around.

And now I've tripped over "awesome" at least twice.

Let me explain.  I found an old article I wrote describing software as super-powers.

The idea is that there are systems that help doctors avoid drug interactions while they are in the act of prescribing drugs. This has had a huge impact on the world. Patients have fewer complications, doctors have fewer lawsuits, and hospitals have less workload from drug-interaction problems. Compared to pre-software-assist, doctors are impressively aware of interactions and up-to-date with new alerts because of software. That software is "making them awesome" in some regards.

The idea of making people awesome is just that - always be looking for ways to multiply other people's ability to be successful in their endeavors.

It doesn't matter if they're coding, testing, managing, performing medical services, nursing, hosting parties, supporting software systems, installing cable TV, cooking dinner, or editing podcasts. We can always be looking to "make people awesome" at the things they do for others.

I don't know that we'll always hold to the current phrasing. I would be okay with finding another way to say this that also fits on stickers and is easily memorable. In most ways, the current phrasing is fine, if only there weren't so many triggers to trip.

The Lightweight Tweetstream



Once we had "lightweight methods" as a frequent topic of discussion. It's still the movement I pursue.
Some innovators came through and invented radically different ways of working, usually through collaboration and teamwork.
The idea of simplifying the workflow was met with much enthusiasm in some quarters and surprisingly hot disdain and outraged anger in others. Still, those practicing lightweight methods produced software quite well, so lightweight methods persevered. 
Lightweight methods took the name "agile", which is a perfectly good name.  Even so, I'll not use it much here. 
Lightweight methods pursue "the least process you can afford" at all times. 
  • If you can afford less documentation, then cut some of the documentation out.
  • If you can afford fewer queues and piles, then streamline your process. 
  • If you can afford a cheaper alternative to approval cycles or big plans up front, by all means, use the faster cheaper alternatives.
  • Can you replace an expensive, accurate method with one that is cheaper and less accurate, but the result is "close enough"? Then replace the expensive method.
  • Are there ceremonies and meetings you can cut out? Then why keep having them?
  • If your process quits working, then you may have less process than you can afford, so you need to raise it up a notch or try simplifying a different part of the process.
Cool analyst techniques emerged. Techniques like story mapping and example mapping make it possible to release a feature in end-to-end "slices" so that the system always works, always provides value to some users (even if the whole feature-as-specified isn't fully written yet).
Lightweight methods sometimes find expression in safer practices, so fewer preventative governing processes are needed. 
  • If you integrate code all the time, then you don't have big ugly merges. 
  • If you work in groups, you don't need as many code/test reviews.
  • If your business/customer person is available, they can simply answer questions instead of writing explanatory documents. 
  • If you micro-test continuously, you can avoid a lot of bug fixing later.
  • If you automate boring, repetitive tests, then you can test more often and faster.
  • If all the dull testing is automated, then humans can concentrate on the interesting testing tasks.
  • If you keep code clean, you can make changes quickly without relying on the original author to explain and edit it for you.
  • If you have smaller batches, you can release more often.
  • If you test and release more often, releasing becomes a non-issue.
  • If you build through iterative enhancement, then you can choose to stop elaborating a feature once the remaining bits aren't very important.
  • If you collaborate all the time, you will need few meetings and documents (cf Mob Programming).
And of course, the biggest thing that all the lightweight methods seemed to do was (as Chet Hendrickson said) to "get the software working very early in the project and keep it running throughout."
Continuous Delivery and Continuous Deployment came along, enabled by new technologies. These help to further drive software development methods to become even more lightweight.
Sadly, agile methods, in an attempt to scale up to very large organizations and automated management tools, quickly became heavyweight. Or, rather, a lot of heavyweight implementations and processes started calling themselves "agile." Some of these methods require complicated diagrams and many roles and rules to operate, and are easily on par with the heavyweight methods that agile attempted to replace.
A series of "Nos" emerged.  NoEstimates,  NoProjects. No silos.

Likewise, limits arise. WIP = 1, Limited Red Society.

This is a familiar cycle. There is bloat, then reform, then bloat, then reform.
Rules tend to accrete. Someone has a misalignment - we make rules to prevent or recognize misalignments. People make mistakes, and we make rules to rule out the mistakes. People miscommunicate, so we build rules and documentation to correct their communication. People fail, and we add process and rules to force them to succeed when we want them to.

Over time, the basic dynamics of large organizations make rules, standardization, and conformance look appealing, and we need sheriffs to make sure the rules are followed and auditors to check that the sheriffs are doing their jobs, and people-watchers to watch the people-watchers who watch the people who ensure that the rules are followed.

Me, I back the plucky group of rebels who try to downsize the process. I especially back the ones who replace rules and processes with human dynamics, human alignment, curiosity, concern for others, and healthy doses of modern automation to drive the dullness out of their daily work.

I particularly like replacing recipes with values, curiosity, trust, alignment, and understanding.

Heavyweight processes run on permission, rules, restraints, limits, conformance measures, numerical goals.

Lightweight methods run on agreements and experiments, enabled through trust and alignment.  This will always be the movement I pursue.

Friday, February 10, 2017

The Dev Goal

As developers, we want to produce results and we want to produce them fast.

Normally, we work in the context of a team, where we all collectively want to be fast and stay fast, and produce results that work.

In order to become fast, we have to learn the tools and techniques which allow us to practice fastness and we need to measure whether a technique or tool actually lets us be sustainably fast as a group.

That's hard.

But it's important.

Today I was thinking about all the work we do and the practices we use.

In hopes that it will be useful to stir up conversation and debate, I have tried to sum up our practices in a small set of imperatives:


  • Write useful code only. 
  • Don't write defects.
  • Don't write code that invites others to create defects.
  • Don't write code which hides or obfuscates defects (yours or other people's) -- a small example follows this list. 
  • Don't cultivate habits which may result in code which contains, hides, or obfuscates defects.
  • Don't take your advantage at another programmer's expense: don't make messes for teammates to clean up, or take a personal speed-up that slows down the team, or take chances that may result in other departments working harder as a result of your "shortcut". 
And there you have it. 
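
As an illustration of the "hides or obfuscates defects" imperative, here is a small hypothetical Java sketch (ConfigLoader and its methods are invented for illustration, not taken from any real codebase). The first version swallows a failure so the defect surfaces far from its cause; the second makes the defect loud and local:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ConfigLoader {
    // Hides defects: a missing or unreadable file silently becomes an empty
    // configuration, and the failure shows up later, far from its cause.
    static String loadQuietly(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            return ""; // the defect is obfuscated right here
        }
    }

    // Surfaces defects: the caller learns immediately which file failed and why.
    static String load(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            throw new UncheckedIOException("Cannot read config: " + path, e);
        }
    }
}

The second version asks no extra discipline from teammates; the first quietly creates messes for them to clean up later.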

Your comments are precious, including disagreements. 

Monday, January 16, 2017

Getting stuff done.

Here is a week in the life of a technical coach.

I started the week by flying. I'm about an hour's drive from the airport, and this particular flight was only a couple of hours. When I landed, I spent a half-hour to forty-five minutes standing outside in a taxi line, then took an hour's drive to my hotel. The hotel is wonderful. I had fish-n-chips in the hotel restaurant and checked in for the night.

The next day, work started.

I was working with a team (which remains anonymous). We had a quick talk, then picked up some work to do together. We agreed to try mob programming all day, with punctuated bits of explanation along the way.  I asked that we do real work all week.

However, I know it's threatening to take someone's work that was done in private so far, put it on the board in front of everyone, and spot code smells and issues. It just seems unfair. As a result, we decided to do some real work that nobody had been working on yet. I suggested that it could be in the existing code base so that we could work on "legacy" skills, but fresh code would also be fine.

The PO had a service that he thought would be very useful in his company, and since none of us had invested in the code already we agreed to do that work.

We began by establishing safety -- picking a stack, setting up an environment, establishing version control, installing test libraries -- so that we could start on the right foot.  The team picked a language that I was largely unfamiliar with (which is fine) and which most of them were only lightly familiar with (which is fine).

There were sets of features discussed. This was the "three amigos" meeting but done with a whole team instead of just a few people. We all pretty well knew what we were going to be doing.  The features were all too big to be cranking out several completed ones per day, so we took a little slice of a basic and essential feature and started.

We learned about the feature, the language, and the testing environment on the fly, and soon had some BDD tests automated. Before long we'd gotten the first test to pass, committed it, and were on to the next. We did several scenarios in the first day, refactoring and integrating as we went, learning how to write tests and code, relying on "doing one thing at a time" and constantly practicing "pickyism" on code and tests.

On the second day people were saying that this was pretty good, but it wasn't "real" code. Of course, the PO and I intended that it was very much the real code, in embryonic form. We listed the reasons it wasn't "real," and on the second day this list drove our prioritization. We did the most important part of making it real, then when something else became more important we switched. There was some good discussion, and by the end of the day it was many check-ins further along and working as a proper server.

On the third day, besides doing demos, we completed the pipeline so that we were about 90% of the way to Continuous Deployment. This involved a lot more waiting, so we used the "downtime" to learn how to do things that we needed to do next. We were joined by people in the org who had heard good things about the real progress we were making.

On the fourth day we picked up some "legacy" code (by the Michael Feathers definition) and spent the day cleaning, renaming, and refactoring so that we could easily add the next feature. This was another language that I was only lightly familiar with, having used it once before for a couple of days several years ago.

We followed the Kent Beck rule:
When faced with making a change, first make the change easy (warning: this may be hard) and then make the easy change.
By the end of the day, the new change was relatively easy, and the code where the change had to be implemented was all nicely "under test." Several new techniques were by then pretty well-known to the team, and we pushed new code.

Also, on day 4 we found out that the work we did on the first three days already had an internal customer. We were close enough that only a couple of small changes would be needed to satisfy this internal customer -- even though we'd completed less than half of the "why it's not real" list and had not completed the product backlog. It just turns out, as it so often does, that a very minimal slice of a product can provide value early in the development cycle. You almost never have to have all of the "minimal" features in order to make the code useful.

Friday? An hour to the airport, an hour in the waiting area, an hour plus on the tarmac, a couple hours in the air, an hour back home, and then logistics for my next trips and answering emails.

So, basically, I had travel and project work and then more travel. It's pretty simple. The hardest parts are mostly learning, but in software learning and thinking are 11/12ths of the work anyway.

Tuesday, December 20, 2016

Implicit Time-Based Coupling - Inside a Class.

Imagine, if you would, a class in an object-oriented language.
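
Picture something like this minimal sketch (a hypothetical Java rendering; the names Trouble, x, y, and i1 match the discussion below, where x() stores a value in a field i1 that y() later depends on):

class Trouble {
    private int i1;

    Trouble() {
        // note: i1 is not given a meaningful value here
    }

    void x() {
        i1 = expensiveLookup(); // sets the value that y() will need
    }

    int y() {
        return i1 * 2; // quietly assumes x() has already been called
    }

    private int expensiveLookup() {
        return 21; // stand-in for real work
    }
}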


Notice that x() and y() are not constructors.

Now, a "good" user of an instance of Trouble will do something like this:
t.x(); // always call x() before y()  
t.y(); 
Whereas a naive user of the class might do something more like this:
t.y();

In this case the value of i1 (used by function y) is either undefined or possibly some leftover value from earlier uses of instance t.

How Do You Know?


The functions x and y have an implicit temporal binding.  When you look at the object diagram, or if you use code completion, nothing you see will tell you that you must call x before y.

One of the following must be true:

  1. You don't know about it, and have been "just lucky" so far
  2. You don't know, and are currently making bugs you don't know about
  3. You know because you've read the code from top to bottom and understand the implicit temporal coupling. 
  4. You copied someone else's example after yours didn't work, and you don't know why it calls x() before y(), but dammit it works.
  5. You've spent considerable time in trial-and-error and treat x() as a magic incantation.
  6. Someone told you.
Of these, I'm most afraid of 1 and 2. These are hidden dangers, and eventually there is going to be a nasty surprise for someone -- possibly the customers.

If one is working at a large scale (dozens or hundreds of people using Trouble) then 3 is asking far too much. Who can afford to carefully read the implementations of all the classes they use, and keep track of when variables are set and used? You can't afford to have 20 people wasting time on this, in hopes that they might spot the one or two special classes that rely on temporal coupling. 

While it's easy to blame all "bad code" on a lack of discipline, we find that it is more productive to fix the tricky coding practices that require a high level of discipline. 

You will always find it more scalable and productive to make the work easier than to work harder.

Don't work people harder. 
Make people's work easier!

Number 4 is also a nightmare at scale. Code duplication is the king of code smells, but here people are copying code because they feel like they can't afford the time to understand the code they're producing. That's awful. I don't blame the copiers; they're in a situation where they're asked to put out effort they can't afford. But I would rather they investigated instead of copying blindly. 

Number 5 is too much to ask.

Number 6 (someone told you) is not bad in and of itself. Communication is good. But here the communication is how to work around a problem in the code so that you don't have to fix it. That's perfectly reasonable-sounding if you can't change the code itself, but questionable in general. How safe are you if the only thing protecting you from disaster (in the hazards of 1 & 2, or the time sinks of 4 & 5) is oral tradition? 

At Scale?

While solutions often scale poorly, we find that problems always scale very well, indeed.

The ideal when programming in groups and at scale is that you demand the least from each other, so that you can all accomplish the most.



At scale, you want to build code so it is easy for other people to do what works well, and avoid hazards and risks along the way. 

All implicit couplings are risky, including (but not limited to) duplication of algorithms. Every fact, every algorithm, every bit of knowledge should have a Single Point of Truth in the application. 

Here the design of the code is such that the "point of truth" (knowledge that x() must be called before y()) is distributed to every bit of code which needs to call function y.  

It demands duplication of code in the callers, and demands a higher level of "due diligence" from the programmers. 

At scale, this small feature becomes a big problem. 

So now what?

There are usually ways to fix this problem. There are also ways to cope with not solving it. 
  • Make y call x.
  • Incorporate y into x, if y really only means "complete the work started in x."
  • Have y set i1 directly.
  • Initialize i1 to a null or indicator value in the constructor, and add a guard clause in y which will throw an exception if i1 has not been given a meaningful value (see the sketch after this list).
  • Create a new function that calls x() and then y() -- and rename function y to something that sounds dangerous to use, like unsafeY or  innerYtoBeUsedOnlyIfI1HasBeenSet.
  • Rename x to prepareForY.
  • If only x and y use i1, then it is a temporary field (code smell) and you should refactor it away normally.
  • Add an optional parameter to y, which is the value you want i1 to have. Combine this strategy with the guard clause idea, above.
  • Build a wrapper class around the Trouble class, so that the wrapper is the Single Point Of Truth about how the Trouble class should be handled. Make the Trouble class private, internal, or otherwise hidden. 
  • Go find or make a replacement class for Trouble.  Who needs it? 
  • Tolerate Trouble by setting a cron job to search for invocations of y. Review any new uses of the function immediately.

You can probably come up with a dozen more ways to improve the situation.
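
To make a couple of the options above concrete, here is a minimal, hypothetical Java sketch built on the Trouble shape from earlier. It combines the guard-clause idea with a single method that performs both steps; the names SaferTrouble and computeBoth are chosen purely for illustration:

class SaferTrouble {
    private Integer i1 = null; // null marks "x() has not run yet"

    void x() {
        i1 = expensiveLookup();
    }

    int y() {
        if (i1 == null) { // guard clause: fail loudly instead of silently misbehaving
            throw new IllegalStateException("call x() before y()");
        }
        return i1 * 2;
    }

    // Single point of truth for the required ordering.
    int computeBoth() {
        x();
        return y();
    }

    private int expensiveLookup() {
        return 21; // stand-in for real work
    }
}

Callers who use computeBoth() never need to know about the ordering at all, and the guard clause turns the hidden hazards of cases 1 and 2 into an immediate, visible failure.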

But the important thing is that we can recognize implicit temporal coupling and its role in making code expensive to write, and that we take steps to make the work easier if we want to accomplish more work sooner. 

And who doesn't?

Wednesday, December 14, 2016

The Employee's Unapproved Feature

Your employee “wastes” time doing a feature you didn’t approve.

You find out about the feature as it is being released to customers.

Customers are enthusiastically happy with the new feature. Thrilled, in fact. They compliment you and your team!

Do you

  • rein in your rogue employee 
  • give her more influence in deciding future features
  • ignore this one infraction since it worked out okay
Quickly write down your answer and then read the next paragraph.



...


Okay, you've written down your answer. It doesn't matter so much to me what answer you chose, or if you chose one not given above. What I want to know is what were the principles on which you based your answer.

  • How important was the outcome v. the process? 
  • How crucial are conformance and predictability of action v. success of action and engagement? 
  • Was the employee's act one of insubordination or service to customers? 
  • Which is more important? 

If you complete this little meditation and change your answer, what did you change it from and what to? And why? 

I'd love your answers here, or, if you prefer to be anonymous, at my sayat site. 

Thursday, December 1, 2016

TDD: Start With A Failing Test

The question was asked:
In Test-Driven Development, what does it mean to start with a failing test? 

This is not a complicated question, so let me give the short answer:

  • Write a test that can't possibly succeed because you have not yet implemented the feature; but which would succeed if the part of the feature it's testing were written.
  • You want it to be a good test: clear, obvious, simple, discrete.
  • You want it to fail, so you see what the error message will look like -- whether it will provide enough information when someday it fails unexpectedly.
  • Then you write just enough of the feature that the test passes (but not the whole feature) -- a small example follows this list.
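
For instance, here's a hypothetical sketch in Java with JUnit 5 (the Stack class, push, and size are invented for illustration). The test is written first and fails because the behavior doesn't exist yet; then just enough code is written to make it pass:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class StackTest {
    @Test
    void pushGrowsTheStackByOne() {
        Stack stack = new Stack(); // fails at first: Stack doesn't exist yet
        stack.push(42);
        assertEquals(1, stack.size());
    }
}

// Just enough implementation to make the test pass -- and no more.
class Stack {
    private int count = 0;

    void push(int value) {
        count++; // storing the value isn't needed until a test demands it
    }

    int size() {
        return count;
    }
}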

The idea is like a video game. You write a test, which is your first challenge. Then you beat that challenge and save your game (to version control) so you can come back. You layer on the challenges until you've beaten the game (written the feature).

There is more to the TDD cycle, but this is enough to answer the one question.

BTW, the same "accumulation of phases that work" is the preferred approach to writing stories, which add to features... the whole world of test-driven is about thin, tested, integrated slices continually being built and integrated.