
As I’ve mentioned a bunch of times, I tutor an Agile Project Management course at the Queensland University of Technology. It’s been useful to me on a number of fronts, from making me think more about what the concept of being agile actually means to me, to simply giving me more experience speaking in front of large groups of people. On the other side of the equation, I hope it’s been equally useful to the students.

A secondary benefit of tutoring is that it exposes me to new concepts and activities that I’ve never really seen before. I’d never heard of Scrum City until we did it with the students back in the first semester, and the topic of today’s blog post is a similar sort of thing.

Lean Coffee.

An Unpleasant Drink

Fortunately, Lean Coffee has absolutely nothing to do with coffee. Well, not anymore, anyway.

Apparently the term was originally coined as a result of a desire to avoid organising speakers or dealing with the logistics of booking a venue for a regular meeting. The participants simply met at a particular coffee shop and started engaging in the process which would eventually become known as Lean Coffee (one because it’s lightweight, and two because it was at a coffee shop).

At a high level, I would describe Lean Coffee as a democratically driven discussion, used to facilitate conversation around a theme while maintaining interest and engagement.

It’s the sort of idea that aims to solve the problem of mind-numbing meetings that attempt to cover important subjects, but fail miserably because of the way they are run.

Who Needs a Boost Anyway?

It all starts with the selection of an overarching theme. This can be anything at all, but obviously it should be something that actually needs discussing and that the group engaging in the discussion has some stake in.

In the case of the tutorial, the theme was an upcoming piece of assessment (an Agile Inception Deck for a project of their choosing).

Each individual is then responsible for coming up with any number of topics or questions that fit into the theme. Each topic should be clear enough to be understood easily, should have some merit when applied to the greater group and should be noted down clearly on a post-it or equivalent.

This will take about 10 minutes, and ideally should happen in relative silence (as the topics are developed by the individual, and do not need additional discussion, at least not yet).

At the end, all topics should be affixed to the first column in a basic 3 column workflow board (To Discuss, Discussing and Discussed).

Don’t worry too much about the relevance of each topic, as the next stage will sort that out. Remember, you are just the facilitator, the actual discussion is owned by the group of people who are doing the discussing.

I’m High on Life

Spend a few minutes going through the topics, reading them out and getting clarifications as necessary.

Now, get the group to vote on the topics. Each person gets 3 votes, and they can apply them in any way they see fit (multiple votes to one topic, spread them out, only use some of their votes, it doesn’t matter). If you have a large number of people, a simple line is good enough to avoid the crush during voting, but it will take some time to get through everyone. Depending on how big your wall of topics is, it’s best to get more than one person voting at a time, and limit the amount of time each person has to vote to less than 30 seconds.

Conceptually, voting allows the topics that concern the greatest number of people to rise to the top, allowing you to prioritize them ahead of the topics that concern fewer people. This is the democratic part of the process and allows for some real engagement in the discussion by the participants, because they’ve had some input into what they think are the most important things to talk about.

That last point was particularly relevant for my tutorial. For some reason, when given the opportunity, the students were reticent to ask questions about the assessment. I did have a few, but not nearly as many as I expected. This activity generated something like 20 topics though, of which around 15 were useful to the group as a whole, and really helped them to get a better handle on how to do well.
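As a rough sketch of the tallying mechanics (the topic names and ballots below are invented purely for illustration, not part of any formal Lean Coffee definition), the prioritisation step is really just counting and sorting:

```python
from collections import Counter

# Hypothetical topics and ballots. Each person gets 3 votes and may
# distribute (or withhold) them however they like.
ballots = [
    ["scope of the deck", "scope of the deck", "marking criteria"],
    ["marking criteria", "team size", "scope of the deck"],
    ["marking criteria"],  # using fewer than 3 votes is fine
]

votes = Counter()
for ballot in ballots:
    votes.update(ballot)

# Highest-voted topics are discussed first; ties keep their original order.
priority_order = [topic for topic, _ in votes.most_common()]
print(priority_order)
# → ['scope of the deck', 'marking criteria', 'team size']
```

Nothing about the process depends on tooling, of course; post-its and a wall do exactly the same job.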

That’s What They Call Cocaine Now

After the voting is finished, rearrange the board so it is organised by the number of votes (i.e. priority) and then it’s time to start the actual discussions.

Pick off the top topic, read it out and make sure everyone has a common understanding of what needs to be discussed. If a topic is not clear by this point (and it should be, because in order to vote you need the topic to be understandable) you may have to get the creator of the topic to speak up. Once everything is ready, start a timer for 5 minutes and then let the discussion begin. After the time runs out, try to summarise the discussion (and note down actions or other results as necessary). If there is more discussion to be had, start another timer for 2 minutes, and then let that play out.

Once the second timer runs out, regardless of whether everything is perfectly sorted out, move on to the next topic. Rinse and repeat until you run out of time (or topics obviously).
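The whole discussion phase can be summarised as a simple loop. This sketch is just my own illustration of the timings described above (the function and callback names are invented):

```python
# A sketch of the Lean Coffee discussion loop: 5 minutes per topic, with
# one optional 2 minute extension, then move on regardless.
def lean_coffee(topics, discuss, group_wants_more):
    """Run timeboxed discussions over topics already sorted by votes.

    discuss(topic, minutes=n) runs one timeboxed discussion;
    group_wants_more(topic) asks the group whether to extend.
    """
    for topic in topics:
        discuss(topic, minutes=5)      # initial timebox
        if group_wants_more(topic):
            discuss(topic, minutes=2)  # one short extension...
        # ...then move on, until topics (or time) run out
```

With stub callbacks, discussing topics ["a", "b"] where only "a" earns an extension produces the call sequence (a, 5), (a, 2), (b, 5).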

In my case, the topics being discussed were mostly one sided (i.e. me answering questions and offering clarifications about the piece of assessment), but running this activity in a normal business situation where no-one has all the answers should allow everyone to take part equally.

Conclusion

I found the concept of Lean Coffee to be extremely effective in facilitating a discussion while also maintaining a high level of engagement. It has been a long time since I’ve really felt like a group of people were interested in discussing a topic like they were when this process was used to facilitate the conversation.

This interests me at a fundamental level, because I’d actually tried to engage the students on the theme at an earlier occasion, thinking they would have a lot of questions about the assessment item. At that time I used the simplest approach, which was to canvass the group for questions and topics one at a time. I did have a few bites, but nowhere near the level of participation that I did with Lean Coffee.

The name is still stupid though.


As a result of being a tutor at QUT (Agile Project Management, IAB304 (undergraduate), IFN700 (post-graduate)), I get some perks. Nothing as useful as the ability to teleport at will or free candy, but I do occasionally get the opportunity to do things I would not normally do. Most recently, they offered me a place in a course to increase my teaching effectiveness. Honestly, it’s a pretty good idea, because better teachers = happier, smarter students, which I’m sure ends up in QUT getting more money at some point. Anyway, the course is called “Foundations of Learning and Teaching” and it’s been run sporadically over the last month or two (one Monday for 3 hours here, another Monday there, etc).

As you would expect from a University course, there is a piece of assessment.

I’m going to use this blog post to kill two problems with one idea, mostly because killing birds seems unnecessary (and I have bad aim with rocks, so it would be more like “breaking my own windows and then paying for them”). It will function as a mechanism and record for completing the assessment and as my weekly blog post. Efficient.

Anyway, the assessment was to come up with some sort of idea to provide support to students/increase engagement/build a learning community, within my teaching context (so my tutorial).

I’ve cheated at least a little bit here, because technically I was already doing something to increase engagement, but I’ll be damned if I’m not going to use it just because I thought of it before doing the course.

We do retrospectives at the end of each workshop.

Mirror Magic

If you’ve ever had anything to do with Agile (Scrum or any other framework), you will likely be very familiar with the concept of retrospectives. Part of being agile is to make time for continual improvement, so that you’re getting at least a little bit better all the time. One of the standard mechanisms for doing this is to put aside some time at the end of every sprint/iteration to think about how everything went and what could be improved.

I’ve been practicing agile concepts for a while now, so the concept is pretty ingrained into most things I do, but I still find it very useful for capping off any major effort and helping to focus in on ways to get better at whatever you want.

In the context of the workshops at QUT, I treat each workshop as a “sprint”. They start with a short planning session, sometimes feature the grooming of future workshop content in the middle and always end with a retrospective.

While I think the whole picture (running the workshops as if they were sprints) is useful, I’m just going to zero in on the retrospective part, specifically as a mechanism for both increasing engagement and for building a community that treats self-improvement as a normal part of working.

The real meat of the idea is to encourage critical thinking beyond the course material. Each workshop is always filled with all sorts of intellectual activity, but none of it is focused around the process of learning itself. By adding a piece of dedicated time to the end of every workshop, and facilitating the analysis of the process that we just shared as a group, the context is switched from one focused purely on learning new concepts, to how the learning process itself went.

Are Reflections Backwards…or Are We?

But what exactly is a retrospective?

To be honest, there is no one true way to run a retrospective, and if you try to run them the same way all the time, everyone will just get tired of doing them. They become stale, boring and generally lose their effectiveness very quickly. Try to switch it up regularly, to keep it fresh and interesting for everyone involved (including you!).

Anyway, the goal is simply to facilitate reflective discussions, and any mechanism to do that is acceptable. In fact, the more unusual the mechanism (as long as it’s understandable), the better the results are likely to be, because it will take people out of their comfort zone and encourage them to think in new and different ways.

To rebound somewhat from the effectively infinite space of “anything is a retrospective!”, I’m going to outline two specific approaches that can be used to facilitate the process.

The first is very stock standard, and relies on bucketing points into 3 distinct categories: what went well, what could we do better and any open questions.

The second is more visual, and involves drawing a chart of milestones and overall happiness during the iteration.

There’s a Hole In The Bucket

The buckets approach is possibly the most common approach to retrospectives, even if the names of the buckets change constantly.

The first bucket (what went well) is focused on celebrating successes. It’s important to begin by trying to engage with everyone involved on the victories that were just achieved, because otherwise retrospectives can become very negative very quickly. This is a result of most people naturally focusing on the bad things that they would like to see fixed. In terms of self improvement, the results of this question provide reinforcement for anything currently being done (either a new idea as a result of a previous retrospective or because it was always done).

The second bucket (what could we do better) is focused on stopping or redirecting behaviours that are not helping. You will often find the most feedback here, for the same reason I mentioned above (focusing on negatives as improvement points), so don’t get discouraged if there is 1 point in the first bucket and then 10 in the second. This is where you can get into some extremely useful discussion points, assuming everyone is engaged in the process. Putting aside ego is important here, as it can be very easy for people to accidentally switch into an accusatory frame of mind (“Everything I did was great, but Bob broke everything”), so you have to be careful to steer the discussion into a productive direction.

The final bucket (any open questions) is really just for anything that doesn’t fit into the first two buckets. It allows for the recording of absolutely anything that anyone has any thoughts about, whether it be open questions (“I don’t understand X, please explain”) or anything else that might be relevant.

After facilitating discussion of any points that fit into the buckets above, the final step is to determine at least one action for the next iteration. Actions can be anything, but they should be related to one of the points discussed in the first part of the retrospective. They can be simple (“please write bigger on the whiteboard”) or complex (“we should use a random approach for presenting the results of our activities”), it really doesn’t matter. Actions are a concrete way to accomplish the goal of self-improvement (especially because they should have an owner who is responsible for making sure they occur), but even having a reflective discussion can be enough to increase engagement and encourage improvement.
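If it helps to make the structure concrete, here is a hypothetical little model of the three buckets plus the "at least one owned action" rule (this is entirely my own invention, not a standard tool):

```python
from dataclasses import dataclass, field

# A hypothetical model of the three-bucket retrospective described above.
@dataclass
class Retrospective:
    went_well: list = field(default_factory=list)
    do_better: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)
    actions: list = field(default_factory=list)  # (description, owner) pairs

    def is_complete(self) -> bool:
        # At least one action, and every action needs an owner who is
        # responsible for making sure it happens.
        return len(self.actions) >= 1 and all(owner for _, owner in self.actions)

retro = Retrospective()
retro.do_better.append("please write bigger on the whiteboard")
retro.actions.append(("bring thicker whiteboard markers", "facilitator"))
print(retro.is_complete())  # → True
```

The point of the model is only to show where the buckets end and the actions begin; the discussion itself is the valuable part.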

There’s No Emoticon For What I’m Feeling!

The visual approach is an interesting one, and speaks to people who are more visually or feelings oriented, which can be useful as a mechanism of making sure everyone is engaged. Honestly, you’ll never be able to engage everyone at the same time, but if you keep changing the way you approach the retrospective, you will at least be able to engage different sections of the audience at different times, increasing the total amount of engagement.

It’s simple enough. Draw a chart with two axes, the Y-axis representing happiness (sad, neutral, happy) and the X-axis representing time.

Canvass the audience to identify milestones within the time period (to align everyone), annotate the X-axis with those milestones and then get everyone to draw a line that represents their level of happiness during the time period.

As people are drawing their lines, they will identify things that made them happy or sad (in a relatively organic fashion), which should act as triggers for conversation.

At the end, it’s ideal to think about some actions that could be taken to improve the overall level of happiness, similar to the actions that come from the bucket approach.

Am I Pushing Against The Mirror, Or Is It Pushing Against Me?

Running retrospectives as part of workshops is not all puppies and roses though.

A retrospective is most effective when the group is small (5-9 people). In a classroom of 50+ students, there are just too many people to facilitate. That is not to say that you won’t still get some benefit from the process, it’s just much harder to get a high level of engagement across the entire audience when there are so many people. In particular, the visual approach I outlined above is almost impossible to do if you want everyone to participate.

One mechanism for dealing with this is to break the entire room into groups, such that you have as many groups as you normally would individuals. This can make the process more manageable, but does decrease individual participation, which is a shame.

Another problem that I’ve personally experienced is that the positioning of the retrospective at the end of the workshop can sometimes prove to be its undoing. As time progresses, and freedom draws closer, it can become harder and harder to maintain focus in a classroom. In a normal agile environment where retrospectives bookend iterations (i.e. the next iteration starts shortly after the previous one ends and the retrospective occurs at that boundary), and where there is no appreciable delay between one iteration and the next, this is not as much of a problem (although running a retrospective from 4-5 on a Friday is damn near impossible, even in a work environment). When there is at least a week between iterations, like there is with workshops, it can be very hard to get a good retrospective going.

Last but not least, it can be very hard to get a decent retrospective accomplished in a short amount of time, and I can’t afford to allocate too much time during the workshop.

When running a two week iteration, it’s very normal to put aside a full hour for the retrospective. Even then, this is a relatively small amount of time, and retrospectives are often at risk of running over (aggressive timeboxing is a must). When running a workshop of 2 hours, I can only realistically dedicate 5-10 minutes for the retrospective. It can be very hard to get everyone in the right mindset to get a good discussion going with this extremely limited amount of time, especially when combined with the previous point (lack of focus due to impending freedom).

Aziz Light!

You can see some simple retrospective results in the image gallery below.

IAB304 - S1 2016 - Retrospectives

The first image is actually not related to retrospectives at all, and is the social contract that the class came up with during the very first week (to baseline our interactions and provide a reference point for when things are going poorly), but the remainder of the pictures show snapshots of the board after the end of every workshop so far.

What the pictures don’t show is the conversations that happened as a result of the retrospective, which were far more valuable than anything being written. It doesn’t help that I have a natural tendency to not focus on documentation, and to instead focus on the people and interactions, so there were a lot of things happening that just aren’t recorded in those photos.

I think the retrospectives really help to increase the amount of engagement the students have with the teaching process, and really drive home the point that they have real power in changing the way that things happen, in an immediately visible way.

And as we all know, with great power, comes great responsibility.


In Part 1 of this two parter, I outlined the premise of Scrum City and some of the planning required (in the context of running the game as part of the tutoring work I do for QUT’s Agile Project Management course). Go read that if you want more information, but at a high level Scrum City is an educational game that demonstrates many of the principles and practices of Scrum.

The last post outlined the beginning of the activity:

  • Communicating the vision, i.e. “I want a city! In lego! Make it awesome!”
  • Eliciting requirements, i.e. “What features do you want in your model?”
  • Estimation, i.e. “Just how much will this cost me?”
  • Prioritization, i.e. “I want this before this, but after this”

With the above preparation out of the way, all that’s left is the delivery, and as everyone already knows, delivery is the easiest part...

Executions Can Be Fun

At this point, each team participating in the game (for me that was the two opposing sides of the classroom), should have a backlog. Each backlog item (some feature of the city) should have appropriate acceptance criteria, an estimate (in story points) and some measure of how important it is in the scheme of things. If the team is bigger than 5 people (mine were, each had about 25 people in it), you should also break them down into sub-teams of around 5 people (it will make it easier for the teams to organise their work).

Scrum City consists of 3 iterations of around 20 minutes each. For instructive purposes, the first iteration usually goes for around 30 minutes, so that you have time to explain various concepts (like planning and retrospectives) and to reinforce certain outcomes.

Each iteration consists of 3 phases: planning, execution and retrospective, of length 5, 10 and 5 minutes respectively. Make it very clear that no resources are to be touched during planning/retrospective and that work not present in the delivery area (i.e. the city) at the end of an iteration will not be accepted.

All told, you can easily fit the second half of Scrum City into a 90 minute block (leaving some time at the end for discussion and questions).

Obviously you will also need some physical supplies like lego, coloured pens/pencils and paper. It’s hard to measure lego, so just use your best guess as to how much you’ll need, then double it. It doesn’t hurt to have too much lego, but it will definitely hurt to have too little. A single pack of coloured pens/pencils, 10 sheets of A4 paper and a single piece of A2 will round out the remaining resources required.

Don’t Fire Until You See the Whites of Their Eyes

Each iteration should start with a short planning session.

The goal here is to get each team to quickly put their thoughts together regarding an approximate delivery plan, allocating their backlog over the course of the available iterations. Obviously, 5 minutes isn’t a lot of time, so make sure they focus on the impending iteration. This is where you should reinforce which items are the most important to you (as the product owner) so that they get scheduled sooner rather than later.

Of course, if the team insists on scheduling something you don’t think is important yet, then feel free to let them do it and then mercilessly reject it come delivery time. It’s pretty fun, I suggest trying it at least once.

With priorities and estimates in place from the preparation, planning should be a relatively smooth process.

After each team has finished, note down their expected velocity, and move on to the part where they get to play with lego (I assure you, if people are even the tiniest bit engaged, they will be chomping at the bit for this part).

Everything Is Awesome!

The execution section of each iteration should be fast and furious. Leave the teams to their own devices, and let them focus on delivering their committed work (just like a real iteration).

You will likely have to field questions at this point, as it is unlikely the acceptance criteria will be complete (which is fine, remember stories are invitations to conversations, not detailed specifications).

For iterations past the first, you should complicate the lives of the teams by introducing impediments, like:

  • In iteration two you should introduce an emergency ticket. You’ve decided that it is a deal breaker that the city does not have a prison (or other building), and you require some capacity from the team to prepare the story. You can then require that this ticket be completed during the next iteration.
    • Interestingly enough, the scrum master of one of my teams intercepted and redirected my attempt to interrupt one of his sub-teams with the prison story. It was very well done and a good example of what I would expect from a scrum master.
  • In iteration three, you should announce that part of a team’s capacity (pick the most productive sub-team) has come down with the bubonic plague and will not be available for the remainder of the iteration.

Once the team runs out of time (remember, 10 minutes, no extensions, tightly timeboxed), it’s time for a showcase.

That means it’s time for you, as product owner, to accept or reject the work done.

This is a good opportunity to be brutal on those people who did not ask enough questions to get to the real meat of what makes a feature acceptable. For example, if no-one asked whether or not a one storey house had to be all one colour, you can brutally dismiss any and all houses that don’t meet that criterion.

The first iteration is an extremely good time to really drive the importance of acceptance criteria home.

Once you’ve accepted some subset of the work delivered, record the velocity and explain the concept so that the teams can use it in their next planning session. Most groups will deliver much less than they committed to in the first iteration, because they overestimate their ability/underestimate the complexity of the tickets, especially if you are particularly brutal when rejecting features that don’t meet your internal, unstated criteria.
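The velocity bookkeeping itself is trivial. This sketch uses invented story names and point values purely for illustration:

```python
# Rough sketch of recording velocity after the showcase: velocity is simply
# the story points of the work the product owner actually accepted.
committed = {"one storey house": 2, "two storey house": 4, "hospital": 8}
accepted = {"one storey house": 2, "two storey house": 4}  # hospital rejected

velocity = sum(accepted.values())                # points actually delivered
overcommitment = sum(committed.values()) - velocity

# Next planning session: commit roughly what was measured, not what was hoped.
next_iteration_budget = velocity
print(velocity, overcommitment)  # → 6 8
```

Showing the teams that gap between committed and accepted points is usually all it takes for the second planning session to be far more realistic.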

Think Real Hard

After the intense and generally exhausting execution part of each iteration, it’s time for each team to sit back and have a think about how they did. Retrospectives are an integral part of Scrum, and a common part of any agile process (reflection leading to self-improvement and all that).

Being that each team will only have around 5 minutes to analyse their behaviour and come up with some ways to improve, it may be helpful to provide some guided questions, aimed at trying to get a single improvement out of the process (or a single thing to stop doing because it hurt their performance).

Some suggestions I’ve seen in the past:

  • Instead of waiting until the end to try and deliver the features, deliver them constantly throughout the iteration.
  • Allocate some people to organising lego into coloured piles, to streamline feature construction.
  • Constantly check with the product owner about the acceptability of a feature.

Ideally each team should be able to come up with at least one thing to improve on.

The Epic Conclusion

After a mere 3 iterations (planning, execution, retrospective), each team will have constructed a city, and it’s usually pretty interesting (and honestly impressive) what they manage to get accomplished in the time available.

The two pictures below are the cities that my tutorial constructed during the activity.

S1 2016 Scrum City

Scrum City is a fantastic introduction to Scrum. It’s exceptionally good at teaching the basics to those completely unfamiliar with the process, and it touches on a lot of the more complex and subtle elements present in the philosophy as well. The importance of just enough planning, having good, self-contained stories, the concepts of iterative development (which does not mean build a crappy house and then build a better one later, but instead means build a number of small, fully functional houses) and the importance of minimising interruptions are all very easy lessons to take away from the game, and as the organiser you should do your best to reinforce those learnings.

Plus, everyone loves playing with lego when they should be “working”.


The Agile Project Management course that I tutor at QUT (now known as IAB304) is primarily focused around the DSDM Agile Project Framework. That doesn’t mean that it’s the only thing the course talks about though, which would be pretty one-sided and a poor learning experience. It also covers the Agile Manifesto and its history (the primary reason any of this Agile stuff even exists), as well as some other approaches, like Scrum, Kanban and the Cynefin Framework.

Scrum is actually a really good place to start the course, as a result of its relative simplicity (the entire Scrum Guide being a mere 16 pages), before delving into the somewhat heavier handed (and much more structured/prescriptive) DSDM Agile Project Framework.

Of course, Scrum fits in a somewhat different place than the Agile Project Framework. Scrum is less about projects and more about helping to prioritise and execute a constantly shifting body of work. This is more appropriate for product development rather than the planning, execution and measurement of a particular project (which will have its own goals and guidelines). That is not to say that you can’t use Scrum for project management, it’s just that it’s not written with that sort of thing in mind.

Often when tutoring (or teaching in general) it is far more effective to do rather than to tell. People are much more likely to remember something (and hopefully take some lessons from it), if they are directly participating rather than simply listening to someone spout all of the information. A simulation or game demonstrating some lesson is ideal.

Back in September 2014 I made a post about one such simulation, called Scrumdoku. In that post, I also mentioned a few other games/simulations that I’ve run before, one of which was Scrum City.

It is Scrum City that I will be talking about here, mostly because it’s the simulation that I just finished running over the first two tutorials of the semester.

A City of Scrums?

The premise is relatively simple.

The mayor of some fictitious city wants a scale model of their beloved home, so that they can display it to people and businesses looking to move there.

Those of you familiar with the concept of a Story will notice a familiar structure in that sentence.

As a (mayor), I want (a scale model of my city), so that (I can use it to lure people and businesses there).

Like all good stories, it comes with some acceptance criteria:

  • All items in the model must be made out of lego.
  • Roads, lakes and other geographical features may be drawn.
  • The model must fit on a single piece of A3 paper.

Now that I’ve set the stage, I’ll outline the process that I went through to run the game over the first two tutorials.

Breaking Up Is Hard

I have approximately 55 students in my tutorial. For most of the work during the semester, I break them into groups of 5 or less (well, to be more accurate I let them self-select into those groups based on whatever criteria they want to use).

For Scrum City, 11 groups is too many. For one thing, I don’t have that much lego, but the other reason is that acting as the product owner for 11 different groups is far too hard and I don’t have that much energy.

The easy solution would be to try and run one single Scrum City with all 11 teams, but the more enjoyable solution is to pit one half of them against the other. Already having formed teams of 5 or less, I simply allocated 4 of the larger teams to one side and the remaining teams to the other.

Each side gets the same amount of materials, and they both have approximately the same number of people, so they are on even ground.

The Best Laid Plans

The very first thing to do is to get each group to elect/self-select a Scrum Master. For anyone familiar with Scrum, this is a purely facilitative role that ensures the Scrum ceremonies take place as expected and that everyone is participating appropriately. It helps to have a Scrum Master, even for this short activity, because they can deal with all the administrative stuff, like…

The creation of the backlog.

Creating the backlog is a good opportunity to explain how to create a good story, and the concept of acceptance criteria. Each group needs to figure out a way to farm out the production of around 40 stories, each describing a feature that you (as the mayor) want to be included in the model. Participants should be encouraged to ask questions like what it means for a one storey house to be done (i.e. how big, to scale, colour, position, etc) and then note down that information on an appropriate number of index cards (or similar).

Each backlog should consist of things like:

  • 5+ one storey houses (as separate cards)
  • 5+ two storey houses (again, separate cards)
  • 5+ roads (of varying types, straight, intersections, roundabout, etc)
  • hospital
  • stadium
  • statue
  • bridge
  • police station
  • fire station
  • …and so on

It doesn’t actually matter what elements of a city the stories describe, it’s whatever you would prefer to see represented in your city.

Spend no more than 30 minutes on this.

As Big As Two Houses

Once the backlog is complete, it’s time for the part of planning that I personally hate the most. Estimation.

That’s not to say that I don’t understand the desire for estimates and how they fit into business processes. I do, I just think that they are often misinterpreted and I get tired of being held to estimates that I made in good faith, with a minimum of information, in the face of changing or misunderstood requirements. It gets old pretty quickly. I much prefer a model where the focus is on delivery of features as quickly as possible without requiring some sort of overarching timeframe set by made up numbers.

Each group will be responsible for ensuring that every story they have is estimated in story points. A lot of people have trouble with story points (especially if they are used to estimating in hours or days or some other measure of real time), but I found that the students were fairly receptive to the idea. It helps to give them a baseline (a one storey house is 2 points) and then use that baseline to help them establish other, relative measures (a two storey house is probably 4 points).

There are a number of different ways to do estimation on a large number of stories, but I usually start off with Planning Poker and then, when everyone gets tired of that (which usually happens within a few stories), move over to Affinity Estimation.

Planning Poker is relatively simple. For each story, someone reads out the content (including acceptance criteria). Everyone then has a few moments to gather their thoughts (in silence!) and then everyone shows their estimate at the same time (fingers are a good way to do this with a lot of people). If you’re lucky, everyone will be basically in the same ballpark (±1 point doesn’t really matter), but you want to keep an eye out for dissenters (i.e. everyone thinks 4 but someone says 10). Get the dissenters to explain their reasoning, answer any additional questions that arise (likely to clarify acceptance criteria) and then do another round.

Do this 2-3 times and the estimates should converge as everyone gets a common understanding of the forces in play.
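The dissenter-spotting part of a round can be sketched mechanically (the tolerance threshold and names here are my own choices, not part of the technique):

```python
# A sketch of one Planning Poker round: collect the silent estimates, then
# flag dissenters whose estimate is far from the group, so they can explain
# their reasoning before the next round.

from statistics import median

def dissenters(estimates, tolerance=1):
    """Return the names whose estimate is more than `tolerance` points
    from the group median (±1 point doesn't really matter)."""
    mid = median(estimates.values())
    return [name for name, points in estimates.items()
            if abs(points - mid) > tolerance]

# Round 1: everyone shows fingers at once
round_one = {"Alice": 4, "Bob": 4, "Carol": 5, "Dave": 10}
print(dissenters(round_one))  # ['Dave'] -> ask Dave to explain, then re-estimate
```

In the room this is done by eye, of course; the value is in the conversation with the dissenter, not the arithmetic.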

Planning Poker can be exhausting, especially during the calibration phase, where there is a lot of dissent. Over time it usually becomes easier, but we’re talking weeks for a normal team.

Once 3-4 stories have been estimated using Planning Poker, it’s time to switch to Affinity Estimating.

Spread out all of the unestimated stories on a table or desk, place stories that have already been estimated on a wall in relative positions (i.e. the 2 pointers at one end, 10 pointers at the other, with room in between) and then get everyone in the group to silently (this is important) move stories to the places where they think they belong, relative to the stories with known estimates. Every story should have an estimate within about 5 minutes.

Keep an eye on stories that constantly flip backwards and forwards between low and high estimates, because it usually means those stories need to be talked about in more detail (probably using Planning Poker).

Affinity Estimating is an incredibly efficient way to get through a large number of stories and give them good enough estimates, without having to deal with the overhead of Planning Poker.

Again, spend no more than 30 minutes on this.

What’s Important To Me

The final preparation step is prioritization.

Luckily, this is relatively simple (and gets somewhat repeated during the planning sessions for each Scrum City iteration).

As the mayor (i.e. the product owner), you need to provide guidance to each team as to the relative importance of their stories, and help them to arrange their backlog as appropriate.

Generally I go with basic elements first (houses, roads, etc), followed by utilities (hospital, school, police station, etc), followed by wow factor (statue, stadium, parks, lake, etc). It’s really up to you as product owner to communicate the order of importance.

You can (even though it is not Scrum) introduce the concept of MoSCoW here (Must Have, Should Have, Could Have, Won’t Have) and label each story appropriately.
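Sorting a labelled backlog by MoSCoW category is straightforward (the stories and labels below are illustrative):

```python
# A sketch of MoSCoW prioritisation: label each story, then order the
# backlog so Must Haves come first. Labels here follow the "basics, then
# utilities, then wow factor" ordering described above.

MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

stories = [
    ("Statue", "Could"),
    ("One storey house", "Must"),
    ("Hospital", "Should"),
    ("Underground monorail", "Won't"),
    ("Road", "Must"),
]

# sorted() is stable, so stories within a category keep their relative order
prioritised = sorted(stories, key=lambda story: MOSCOW_ORDER[story[1]])
print([title for title, _ in prioritised])
# ['One storey house', 'Road', 'Hospital', 'Statue', 'Underground monorail']
```
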

The most important thing to have at the end is some measure of the priority of each story, so that when the teams start planning for their iterations, they can create a basic delivery plan taking your preferences into account.

Because the prioritization is less of a group activity than the others, you only really need to spend around 10-15 minutes on this.

To Be Continued

This post is getting a little long, and I’m only about half way through, so I’ll continue it next week, in Everyone Loves Lego: Part 2.

There will even be pictures!


As you may (or may not) have noticed, I’ve published exactly zero blog posts over the last 3 weeks.

I was on holidays, and it was glorious.

Well, actually it was filled with more work than I would like (both from the job I was actually on holidays from as well as some other contract work I do for a company called MEDrefer), but it was still nice to be the master of my own destiny for a little while.

Anyway, I’m back now and everything is happening all at once, as these things sort of do.

There are three main things going on right now: tutoring at QUT, progress on the RavenDB issue I blogged about, and some work I’m doing towards replacing RavenDB altogether (just in case). I’ll give each of those a brief explanation below. I’ve also been doing some work related to running Webdriver IO tests from TeamCity via Powershell (and including the results), as well as fixing an issue with Logstash on Windows, where you can’t easily configure it to not do a full memory dump whenever it crashes (and it crashes a lot!).

Without further ado, on with the show!

How Can I Reach These Kids?

It’s that time of the year when I start up my fairly regular Agile Project Management tutoring gig at QUT (they’ve changed the course code to IAB304 for some ungodly reason this semester, but it’s basically the same thing), so I’ve got that to look forward to. Unfortunately they are still using the DSDM material, but at least it’s changed somewhat to be more closely aligned with Scrum than with some old school project management/agile hybrid.

QUT is also offering sessional academics workshops on how to be a better teacher/tutor, which I plan on attending. There are 4 different workshops being run over the next few months, so I might follow each one with a blog post outlining anything interesting that was covered.

I enjoy tutoring at QUT on multiple levels, even if the bureaucracy there drives me nuts. It gives me an opportunity to really think about what it means to be Agile, which is always a useful thought experiment. Meeting and interacting with people from many diverse backgrounds is also extremely useful for expanding my worldview, and I enjoy helping them understand the concepts and principles in play, and how they benefit both the practitioner and whatever business they are trying to serve.

The Birds is the Word

The guys at Hibernating Rhinos have been really helpful assisting me with getting to the bottom of the most recent RavenDB issue that I was having (a resource consumption issue that was preventing me from upgrading the production servers to RavenDB 3). Usually I would make a full post about the subject, but in this particular case it was mostly them investigating the issue, and me supplying a large number of memory dumps, exported settings, statuses, metrics and various other bits and pieces.

It turns out the issue was in an optimization in RavenDB 3 that caused problems for our particular document/workload profile. I’ve done a better explanation of the issue on the topic I made in the RavenDB Google Group, and Michael Yarichuk (one of the Hibernating Rhinos guys I was working with) has followed that up with even more detail.

I learned quite a few things relating to debugging and otherwise inspecting a running copy of RavenDB, as well as how to properly use the Sysinternals Procdump tool to take memory dumps.

A short summary:

  • RavenDB has stats endpoints which can be hit via a simple HTTP call. {url}/stats and {url}/admin/stats give all sorts of great information, including memory usage and index statistics.
    • I’ve incorporated a regular poll of these endpoints into my logstash configuration for monitoring our RavenDB instance. It doesn’t exactly insert cleanly into Elasticsearch (too many arrays), but it’s still useful, and allows us to chart various RavenDB statistics through Kibana.
  • RavenDB has config endpoints that show what settings are currently in effect (useful for checking available options and to see if your own setting customizations were applied correctly). The main endpoint is available at {url}/debug/config but there are apparently config endpoints for specific databases as well. We only use the default, system database, and there doesn’t seem to be an endpoint specific to that one.
  • The sysinternals tool procdump can be configured to take a full memory dump if your process exceeds a certain amount of usage. procdump -ma -m 4000 w3wp.exe C:\temp\IIS.dmp will take a full memory dump (i.e. not just handles) when the w3wp process exceeds 4GB of memory for at least 10 seconds, and put it in the C:\temp directory. It can be configured to take multiple dumps as well, in case you want to track memory growth over time.
    • If you’re trying to get a memory dump of the w3wp process, make sure you turn off pinging for the appropriate application pool, or IIS will detect that it’s frozen and restart it. You can turn off pinging by running the Powershell command Set-ItemProperty "IIS:\AppPools\{application pool}" -name processmodel.pingingEnabled -Value False. Don’t forget to turn it back on when you’re done.
  • Google Drive is probably the easiest way to give specific people over the internet access to large (multiple gigabyte) files. Of course there is also S3 (which is annoying to permission) and FTP/HTTP (which require setting up other stuff), but I definitely found Google Drive the easiest. OneDrive and DropBox would also probably be similarly easy.
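As a minimal sketch of hitting the stats endpoint described above (the base URL and address are placeholders, not our real setup; the production version of this lives in the logstash configuration, not Python):

```python
# A sketch of polling the RavenDB stats endpoint. A real setup would run
# this on a schedule and ship the parsed document off to Elasticsearch.

import json
from urllib.request import urlopen

def poll_stats(base_url):
    """Hit {url}/stats and return the parsed JSON document."""
    with urlopen(f"{base_url}/stats") as response:
        return json.loads(response.read().decode("utf-8"))

# Example (assumes a RavenDB instance at this hypothetical address):
# stats = poll_stats("http://localhost:8080")
# print(stats)
```
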

Once Hibernating Rhinos provides a stable release containing the fix, it means that we are no longer blocked in upgrading our troubled production instance to the latest version of RavenDB, which will hopefully alleviate some of its performance issues.

More to come on this topic as it unfolds.

Quoth The Raven, Nevermore

Finally, I’ve been putting some thought into how we can move away from RavenDB (or at least experiment with moving away from RavenDB), mostly so that we have a backup plan if the latest version does not in fact fix the performance problems that we’ve been having.

We’ve had a lot of difficulty in simulating the same level and variety of traffic that we see in our production environment (which was one of the reasons why we didn’t pick up any of the issues during our long and involved load testing), so I thought, why not just deploy any experimental persistence providers directly into production and watch how they behave?

It’s not as crazy as it sounds, at least in our case.

Our API instances are hardly utilised at all, so we have plenty of spare CPU to play with in order to explore new solutions.

Our persistence layer is abstracted behind some very basic repository interfaces, so all we would have to do is provide a composite implementation of each repository interface that calls both persistence providers, but only takes the response from the one that is not experimental. As long as we log lots of information about the requests being made and how long they took, we can perform all sorts of interesting analysis without ever actually affecting the user experience.
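The composite idea can be sketched like this (our actual interfaces are C#; the names and shapes here are illustrative only):

```python
# A sketch of the composite repository idea: call both the trusted and the
# experimental persistence provider, time each call, log the comparison, and
# only ever return the trusted result.

import time

class CompositeRepository:
    def __init__(self, trusted, experimental, log=print):
        self.trusted = trusted
        self.experimental = experimental
        self.log = log

    def get(self, key):
        start = time.perf_counter()
        result = self.trusted.get(key)
        trusted_ms = (time.perf_counter() - start) * 1000

        try:
            start = time.perf_counter()
            candidate = self.experimental.get(key)
            experimental_ms = (time.perf_counter() - start) * 1000
            self.log(f"get({key}): trusted {trusted_ms:.2f}ms, "
                     f"experimental {experimental_ms:.2f}ms, "
                     f"match={candidate == result}")
        except Exception as error:
            # The experimental provider must never affect the user experience.
            self.log(f"get({key}): experimental provider failed: {error}")

        return result

# Usage with trivial in-memory stand-ins for the two providers:
class DictRepo:
    def __init__(self, data): self.data = data
    def get(self, key): return self.data[key]

repo = CompositeRepository(DictRepo({"a": 1}), DictRepo({"a": 1}))
print(repo.get("a"))  # always the trusted provider's answer
```

The key property is that the experimental provider is fire-and-observe: its result and its failures only ever end up in the logs.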

Well, that’s the idea anyway. Whether or not it actually works is a whole different question.

I’ll likely make a followup post when I finish exploring the idea properly.

Summary

As good as my kinda-holidays were, it feels nice to be back in the thick of things, smiting problems and creating value.

I’m particularly looking forward to exploring a replacement for RavenDB in our troublesome service, because while I’m confident that the product itself is solid, it’s not something we’re very familiar with, so we’ll always be struggling to make the most of it. We don’t use it anywhere else (and are not planning on using it again), so it’s stuck in this weird place where we aren’t good at it and have little desire to get better in the long run.

It was definitely good to finally get to the bottom of why the new and shiny version of RavenDB was misbehaving so badly though, because most of the time when I have a problem with a product like that, I assume it’s the way I’m using it, not the product itself.

Plus, as a general rule of thumb, I don’t like it when mysteries remain unsolved. It bugs me.

Like why Firefly was cancelled.

Who does that?