The fine art of being precise

Jon Bach wrote a post this morning about how we need to be precise in our thinking. Thank you Jon, it’s a lovely, honest piece with lots of wisdom. But it got me thinking about how precision can sometimes let us down too.

For instance, we can get fooled into thinking that being precise always matters. There are many situations where vagary (what a wonderful word!) is incredibly useful.

When my husband asks me how my day was, I don’t reply with “What do you mean by day?”; instead I typically respond with ‘fine’ or something equally inane. What’s important here is not the precision of the question or even the precision of the answer. My husband isn’t that interested in my day at all; it’s his way of asking “are you ok?”. My answer, though perhaps a little short, is important too, but it’s not really the answer that matters, it’s the tone of my answer that he’s listening out for.

You see this vagary in software teams that work closely together. Over time, these teams develop their own language and don’t feel the need to question every definition. Team members pick up cues from body language and follow unwritten rules without much thought. I see this ability to follow such rules without question as a way of building trust. Often teams that have worked together for a while just ‘know’. They’ve built up a certain amount of tacit knowledge which doesn’t need to be openly discussed.

Unfortunately, many of us have at some point worked in situations where this culture (for want of a better word) is not so healthy. I worked in one such company, where open questioning was implicitly discouraged to the point where a developer worked on the wrong story for a whole iteration. I’ve seen many a tester battered and torn from attempting to pull down those unwritten walls of silence and ambiguity.

But what’s important here is that we recognise that in certain situations it’s appropriate for us to be loose in our language. In fact, I often hold off from being precise, especially if I’m new to a team or client. Instead, I sit and listen, waiting for ambiguity to bubble up and emerge. This intentional act of silence allows me to witness, rather than be told, where implicit assumptions may fester.

So while being precise is an important testing skill, another important one is the ability to identify when and where precision is most required, and when and where we can allow ourselves to be a little more accommodating.


Hanging up my boots

photo by shankar.s

My family and I arrived in Dublin on a cold, dark, wintry November night in 2008. At least metaphorically speaking, it was. It was right after the GFC, and we watched a country and economy wrestle with the prospect of bankruptcy. Regardless, Dublin’s IT and entrepreneurial spirit flourished. People who were made redundant invested their expertise in new ventures. There was a sense of familiarity in being in a recession (the common term for the GFC was ‘recession 2.0’), but also a sense that the future was very much in their own hands.

I decided to start an Irish testers meetup, an opportunity for testers to share their expertise and learn from each other. Our first meetup had four testers. It never grew beyond that. I discovered that there was more to running a meetup than posting on a blog. I learned that testers cannot thrive on testing alone, and that having a nibble or two and a drink goes a long way to attracting a turnout. Having noteworthy speakers also helps.

So when I returned to Australia in 2010 and heard of a Sydney Testers Meetup, I was keen to join. I discovered a hardy band of testers, amongst them Trish Khoo, Marlena Compton and Bruce McLeod. But they had a similar problem to the one I’d had in Dublin. It was hard to get the word out, and attendance was poor.

Sponsorship by Softed and a few heavy-hitting speakers quickly changed that. We had James Bach, Elisabeth Hendrickson and Scott Barber come along and speak. Trish had fantastic ideas about different types of activities: we had games nights and book nights. Julietta Jung brought enthusiasm and excellent pizza-ordering skills.

The Sydney Testers Meetup rapidly grew, but not without its ups and downs. We’ve lost sponsorship, turned down sponsorship, and for a while lived without any sponsorship. We’ve had committee members move to faraway lands. But throughout it all we managed to maintain the spirit of the Sydney Testers Meetup. Today Attribute Testing sponsors the meetup, which has a 600-strong membership.

It’s time now for me to hang up my boots as organiser of the meetup. It’s been a blast and I’ve loved seeing people become infected by testing. I’m leaving the STM in safe hands. Richard Robinson and Devesh Maheshwari will be taking over as organisers. I wish them the best and look forward to seeing the STM do great things.

Scientia potentia est

“Knowledge is power”

Jerry Weinberg cites courage as the most important trait in a tester. Quoting Kipling, Jerry says testers need to “keep your head when all about you are losing theirs and blaming it on you.”

But I see another type of courage at play in software testing. Testers are foremost learners. Through enquiry, they learn about a system. The information they gather facilitates many kinds of decision making, from releasing to designing new features.

Observe great testers and you will discover an insatiable desire to learn more, not only about the product but about the world around them, often incorporating what they learn into their testing.

For many of us, discovering that our learning is within our control and within our means can itself be a road of discovery. It takes courage to start that journey, but it also takes courage to continue along its path.

Young Luke Skywalker found the ‘Force’ early on in life. Yoda helped him connect with that power, but even then, he needed guidance and a reminder to ‘Use the Force’ when up against the evil Empire.

Some of us need that reminder now and then. We know we have the ability to learn, and we know of its power, but we forget to use it. Especially when things get tough.

“Knowledge is power”

When things don’t go the way you want, when the pressures of daily life cloud your ambitions and goals, it can be easy to lose sight of learning. Here’s what I’ve discovered, though: by focusing on learning in these times, you gain great strength.

Will the actual knowledge you learn help you succeed? Perhaps. What really counts is that through learning comes power, in the form of ownership and self-belief. You may not be able to control the situation at hand, but by being open to and owning your learning, you regain a sense of control and a sense of focus.

So when the dog bites, when the bee stings and when you’re feeling sad, remember there is solace in learning. It’s not only an escape, or a way of learning how to deal with the situation; it helps you take ownership of and responsibility for the next step. Who knows, learning may be just the ticket you need to recharge those batteries, giving you the juice to continue on your journey or, perhaps, to dump it for a different destination.

This post was first published on Medium. Mauri Edo tweeted about it recently, and I’ve decided to post it here too because I like it so much.

Why waterfall kicks ass

I read a blog post about why waterfall is NEVER the right approach and I feel compelled to respond to what’s touted as the waterfall mindset. Here’s a copy of the paragraph, but you can read the whole post on the above link to get a better sense of context.

I actually don’t believe adopting waterfall as an approach is ever a good choice.  Waterfall comes with the following mindset:

  • we don’t need feedback between the steps of requirements, analysis, design, code, test
  • we can hand work off
  • big batches are ok since they enable us to be more efficient
  • specialized skills working only on their specialty is good
  • we can understand the work to be done before we do it
  • written requirements can specify what we need

Putting aside, for now, the use of absolutes, let’s address this waterfall mindset:

1) we don’t need feedback between the steps of requirements, analysis, design, code, test

I’ve worked in both waterfall and agile over the years. In those ‘bad old days’ when no-one appreciated collaboration, we used to review requirements extensively. This meant that testers offered valuable input into requirements before any piece of code was developed. Since the invention of ‘agile’, almost everyone has discovered the three amigos, but honestly, this is not an agile concept; it existed long before agile was even thought of.

2) we can hand work off

Honestly, I don’t understand this sentence. I’m serious. I can think of many reasons why handing work to others is a good thing. For instance, if I’ve got too much work, I’m in danger of becoming a bottleneck, and I hand some of my work over to someone else. If that’s a waterfall mindset, it’s one I like to have.

3) big batches are ok since they enable us to be more efficient

What do you mean by efficient here? Does it mean quicker, better quality, less waste? Efficient in what? Design? Writing code? Testing? Support? If I wish to carry 20 oranges from point A to point B, is it more efficient to carry them one at a time, or do I get a bag and carry them all together? Try delivering a quarter of an IC to a customer and requesting feedback. Sometimes delivering something in one batch IS the more efficient way to proceed.

In waterfall, we did have teams, and work was allocated into smaller, isolated tasks. It was rare for one person to develop the whole product. In fact, when I worked on telecommunication systems in the nineties, the concept of frameworks was being introduced, segregating data from its transportation method and allowing people to work on separate parts of a system in parallel.
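
To make that layering idea concrete, here’s a minimal sketch in Python (the names and types are invented for illustration, not taken from any Nortel framework): because the data knows nothing about how it travels, one team can evolve the payload while another works on the transport.

```python
# A minimal, hypothetical sketch of segregating data from transport.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class CallRecord:
    """The data: owned and evolved by one team."""
    caller: str
    callee: str
    duration_secs: int


class Transport(Protocol):
    """The boundary both teams agree on up front."""
    def send(self, payload: bytes) -> None: ...


class LoopbackTransport:
    """A transport implementation: owned by another team."""
    def send(self, payload: bytes) -> None:
        print(f"sent {len(payload)} bytes")


def ship(record: CallRecord, transport: Transport) -> None:
    payload = repr(record).encode("utf-8")  # trivial serialisation
    transport.send(payload)


ship(CallRecord("alice", "bob", 42), LoopbackTransport())
```

The agreed boundary (here, the `Transport` protocol) is what makes the parallel work possible, whatever the surrounding process is called.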

4) specialized skills working only on their specialty is good

When I started working at Nortel Networks in 1994 it was all waterfall, except we didn’t call it waterfall then; we called it software development. Nortel Networks had a policy that encouraged developers and testers to spend time working in each other’s ‘specialisation’ area. For six months I became a developer working to deliver software. I was taught C++ and object-oriented design principles, so I’m uncertain why you think this is a waterfall mindset.

5) we can understand the work to be done before we do it

Why is being able to understand work before you begin considered a ‘bad’ idea? I think this phrase is too ambiguous to discuss with any merit.

6) written requirements can specify what we need

Is it the word requirement, or is it the fact that it’s written, that makes this a ‘bad’ mindset? Yes, we do have written requirements in waterfall. We also have written user stories in agile, so what is the point? Just because stories are written in Confluence doesn’t make them any less written.

In the bad old waterfall days, I facilitated workshops with the business and IT teams to determine and understand risk. Lots of verbal collaboration, lots of whiteboard discussions. I’ve worked in ‘agile’ teams where little communication takes place: no stand-ups, nothing. In fact, one developer spent a week working on the wrong story without realising it.

I’ve worked on some fantastic waterfall projects that blew the socks off some really crappy agile teams I’ve worked with recently. In those waterfall projects, I worked in harmony with great developers and testers, working closely together. We were afforded sufficient time to examine the system as a whole instead of as its parts, something that these days appears to be a bit of a luxury. I’ve worked in environments developing hardware, firmware and software where upfront design and ‘big batch’ systems thinking helped us understand that we were not merely developing code, we were attempting to solve a problem.

It’s easy to conflate poor software development practices with waterfall, just as it’s easy to conflate an agile approach with ‘good’ practice. To me, the biggest changes since those days are technological ones, which allow us to develop, integrate and compile code quickly and relatively cheaply. It’s the technology that has allowed us to develop in the small tasks we are familiar with today. In the early nineties the concept of layering and isolating according to purpose was coming into play, but we simply didn’t have the sophisticated systems that would have allowed us to develop the way we wanted to.

A second important change was the conception of the agile manifesto. To me, the agile manifesto is a stroke of genius: people who had the courage to espouse an ideology that placed people over process, tools and artefacts. For many, this has changed how we think about developing software.

But let’s not forget that the agile manifesto was developed by people who worked in what you call a waterfall environment. It seems to me these people had quite a different mindset from the one suggested above. Those who developed the agile manifesto did so with ideas of collaboration and an emphasis on the ‘humanness’ of developing software. Did those ideas come about as a result of what was done badly in waterfall, or because of what they saw working well?

I suspect it was a little of both.

Don’t get me wrong. Lots of mistakes were made in pre-agile days. There was an idea of segregating developer from tester in an attempt to avoid developer bias. I’m glad we’ve gotten over that idea. But many of the mistakes we made in those days we’re still making now. Most software teams fail to understand testing and attempt to measure it using means that appear to have no direct correlation to quality. Many teams use measurement to lock down estimates, as opposed to using it as a source of information for change. Go to an agile conference and count the number of talks on process and methodologies and frameworks. Look at today’s obsession with continuous delivery as a process. What happened to the people, folks?

It’s easy to understand why. The agile manifesto is bloody hard to implement. It’s much easier to point at a tool or a process and say “that’s what we do”, because it’s explicit and easy to see. Programming and testing are human activities and are much harder to identify and talk about. It’s hard to describe and transfer these skills. The best way I know of is to actually perform the tasks.

We need a little more humility in acknowledging the great shoulders that agile stands on. It’s simplistic to identify, in hindsight, a ‘waterfall’ mindset. Such a thing did not exist. Instead, let’s view those who came before us as the agile manifesto encourages us to: as people attempting to deliver quality software, just as we are attempting to do today.

Are you serious?

Seriously, how serious are you about testing? Let’s presume you study the craft of testing. Does that make you a serious tester?

If you answer yes to this question, congratulations. You are on your way to becoming a serious tester. Now ask yourself this question:

“Do you take yourself seriously?”

If you want to be taken seriously about testing, you need to take yourself seriously. Note the difference here. Taking yourself seriously is much larger than being a serious tester.

Taking yourself seriously means you avoid behaviour and thinking such as:

“I will put myself down in front of others to make them feel better about themselves”
“I resort to behaving childishly when placed in pressure situations”
“I hand over power in order to avoid conflict”
“I feel like a fake even though my actions demonstrate otherwise”

I know this because it’s only recently that I realised I haven’t been taking myself so seriously.

When you stop speaking to yourself in such a way and start taking yourself seriously, a wonderful thing happens. You start to believe in yourself. In fact, you have to. You owe it to yourself to do so.

I’ve discovered a new strength in this self-belief. It means I have the courage and strength to stand up for what I want. In doing so, I give myself respect and honour for all the hard work I have put in.

So, let me ask the question again: Do you take yourself seriously?

Expression epitomised in Rage comics following Bill O’Reilly’s Fox News interview of David Silverman.

Are we there yet?

Ever been on a long car journey packed with young kids? I remember eight-hour drives with six kids crammed into a car, driving to our holiday destination. My younger brother in particular was annoying, constantly asking questions and irritating his sisters. By far the most irritating question (especially to our parents) was: are we there yet? This line of questioning would normally begin half an hour in, and would be repeated constantly for the rest of the journey.

This potent cocktail of repetitive questioning typically resulted in one of my parents (usually my Dad) exploding in frustration, yelling at us all to be quiet. This was usually followed by a threat of being dumped on the side of the road. (He actually carried out his threat once, dumping my brother, who, unfazed, promptly hid in a nearby field, resulting in the whole carload having to search for him.)

But it was a fair question from us kids. We had no real sense of time or distance to help gauge how far we had come and how far we had to go. We were also totally bored, with no iPods or anything to entertain us. Singing (very Von Trapp-like) took us only so far. Counting number plates helped a little. And remember, we were going on holidays; the mere thought conjured up more excitement than our poor little bodies could hold. The journey was always going to be arduous when faced with a destination that held so much promise.

As a parent myself, I have a little more sympathy for my parents now. Of course, we have the luxury of allowing our kids to be immersed in some tacky iPod game, distracting them to the point they forget they are going on holidays. But I get it. I get that it’s really hard to explain to someone with little understanding of distance or time how long something is going to take.

When testers ask me how we know when we are done in Exploratory Testing, I am faced with a similar challenge. How do I help a tester understand when they are done? Pointing them to the excellent list of Stopping Heuristics on Michael Bolton’s blog helps, but how do you apply them? Some are easier than others. The “Time’s up!” heuristic, for example, is pretty simple to apply. But take something like the Flatline Heuristic, which tells us to stop when “no matter what we do, we’re getting the same result.” As Michael points out, there are hidden risks to this. It may be that there is no new information, but it may also mean we have insufficiently explored the application in depth.
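
To make that ambiguity concrete, here’s a minimal sketch in Python of what a naive ‘flatline’ check might look like (the session records and threshold are invented for illustration; real stopping decisions can’t be automated this way). The interesting thing is what the check cannot tell you: it fires both when testing is genuinely exhausted and when the exploration is simply shallow.

```python
# A naive "flatline" detector over exploratory testing session notes.
# Purely illustrative: the record structure and window are invented.

def flatline(sessions, window=3):
    """True if the last `window` sessions produced no new information
    (no new bugs, no new questions, no newly visited areas)."""
    recent = sessions[-window:]
    if len(recent) < window:
        return False  # not enough history to judge
    return all(
        s["new_bugs"] == 0 and s["new_questions"] == 0 and s["new_areas"] == 0
        for s in recent
    )

sessions = [
    {"new_bugs": 2, "new_questions": 3, "new_areas": 1},
    {"new_bugs": 0, "new_questions": 0, "new_areas": 0},
    {"new_bugs": 0, "new_questions": 0, "new_areas": 0},
    {"new_bugs": 0, "new_questions": 0, "new_areas": 0},
]

# Fires in both of the cases Michael warns about: the product may be
# thoroughly explored, or we may be repeating the same shallow tours.
print(flatline(sessions))  # True
```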

In this situation, how do we know what to do?

Like many things in testing, there is no clear-cut answer to this. A considered answer requires an understanding of what’s happening around the testing, and of the implications of the decision being made. Hence the inclusion of the word heuristic.

I’ve found a conversation with stakeholders about stopping heuristics *before* testing starts a useful exercise. Knowing that you have a time limit on your testing goes a long way towards preventing tester angst about when to stop. Include in that conversation a discussion of what ‘done’ means, including the impossibility of complete testing. Stakeholders who accept that bugs may be missed can influence which stopping heuristics you use.

Like kids in the back seat, as we test we need to repeatedly ask ourselves the question “are we done yet?”. It may be useful to use our emotions to trigger this question. For example, if I’m bored, does that mean I’m done, or does it mean I need to change something in my testing? If I’m anxious, does that mean I’m about to hit some constraint in the form of time? If I’m angry, does that mean my information is being ignored, and maybe I need to address that instead of raising a tsunami of bugs? If I’m confused, does that mean I need to explore more? Emotions like these can be useful indicators of when to ask if you’re done. Again, Michael Bolton has done lots of work in this area.

I’ve also found that in times of deep uncertainty it’s always a good idea to play the “phone a friend” card and ask someone more experienced than you for their opinion. There’s no shame in this; these are tough questions to answer. An outsider may have insight or additional knowledge that you’ve overlooked.

Building relationships and credibility with those around you will also go a long way towards helping you in situations when that significant bug is missed. And as in most of testing, articulating your process and your decision making helps to demonstrate diligence and considered testing.

Question with a sprinkle of humility on the side

Michael Bolton tweeted yesterday: “Why do testers insist on trying to be influential?”

I suspect part of the reason is that part of a tester’s job is to recognise problems. Too often, testers see that the *real* problem is not the software itself but the process behind the software, and they go into bug-prevention mode. That sort of change requires influence.

Often, though, we simply don’t have the authority or the influence to make change. We may moan and tear our hair out in frustration, but at the end of the day, without the mandate to make change and the influence to implement it, there’s little we can do to change an organisation’s culture or process.

Trying to do so regardless can lead to a sense of helplessness and even, slowly, over time, a sense of powerlessness. I suspect most of us have felt like this at one time or another. It’s not a great position to be in.

I know I’ve been there. I tried to change the culture of a company (a whole company, no less!) that had some negative ideas about testing and teamwork in general. (I’m talking about this at EuroSTAR this year.) As a consultant, I probably should have known better, but I argued (to myself) that the culture was affecting the testers’ ability to perform their job. It needed to change!

We testers are gifted with keen observation skills, and the nature of our role sometimes means we get to see and recognise problems that perhaps others don’t. But let’s not get ahead of ourselves. Without a mandate for change, we come close to being the schoolyard tattletale, dobbing everyone in and despised by all (often including the teacher). I failed to recognise that I didn’t have the authority or the influence to make these sorts of changes. I fell into a classic consultant’s trap. I really should have known better.

It’s not that we *should* or *should not* ignore these problems. Often these challenges are too complex to be solved with simple answers and probably need to be dealt with case by case. But I think adopting the tone of helper (or servant) goes a long way towards an answer. In some cases a question sprinkled with a little humility can be more helpful than smothering the problem with large doses of tester sauce.

I hope I’ll remember that next time!

Yu Han on what Software Testing means to me

Last year I had the honour of teaching postgraduate students the subject of software testing at the University of Technology Sydney. I asked students if they would like to write a post on testing for my blog. Yu Han did, and here it is.

The subject of Enterprise Software Testing demonstrated the exciting and interesting aspects of testing. The lectures and the project at Suncorp were unforgettable experiences.

During the classes, I felt that most of the time I was acting like a child, playing different kinds of games and drawing pictures with colourful pens. But at the end there would always be serious discussions, analyses and reporting that snapped me back to seriousness.

Those reflections made me realise that my actions and thoughts were more complicated than I had realised, which encouraged me to keep reviewing what I did and exploring how I thought. By doing that, I better understood the purpose behind those activities, and I began to sense the essence of what good testing could be.

I am aware that my mind prefers to use memory or imagination to fill the gap between reality and the information I have received. By linking their similarities, I can better understand the information. But as soon as I reach this stage, my thinking can stop going deeper. I may feel satisfied with a simple explanation, or be distracted by other information that draws my attention to something else. Then I may lose the chance to find the deeper meaning of the information, or even misunderstand it.

Now I am interested in paying more attention to my flow of thinking. Questioning ideas as they appear could be a way to slow it down, which may clarify my understanding or identify potential obstacles hiding behind them. It could also bring back ideas or considerations that were raised and then dropped or reshaped at that incredible speed of thought. Those ideas and considerations can prove questionable as soon as I try to challenge them.

It can turn out that there are missing facts behind ideas that feel reasonable but are not actually practical. Once I try to gather evidence to support those ideas, I may realise they are incorrect or unproven. I may need to confirm them before I go further, especially when my goal is based on such ideas. Take the assumptions we made in the Big Trak exercise: our group needed to test our ideas of what the other buttons did before we could finally test how the targeted button worked. We questioned our ideas but found it hard to move forward until we knew which ones were correct. We came up with many hypotheses, but failed to prove them all at once in practice. After we had built some sort of understanding, we found that we should confirm those assumptions before we continued, which led us to a debugging strategy for our tests. It appears that questioning information not only encourages us to seek the truth but also inspires us to develop our ideas. In this sense, critical thinking can help one seek information wisely and reorganise it into knowledge for developing ideas.

This also reminds me of the concept of Exploratory Testing. In the previous example, we learned the mechanism, built the tests and proved the ideas all by interacting with and exploring the actual system. Exploratory Testing seems a natural way to build and run tests while we still need to learn about the system. By understanding how it works, we can work out how to test it and find bugs. However, it’s difficult to tell how much time I will need to finish the task.

Thanks to the experience at Suncorp, I saw the efficacy of Scripted Testing. Our team performed the testing itself within several hours. We didn’t need to worry about anything else, such as knowing other parts of the system or even the depth of the targeted section. It did take us a couple of weeks to learn the background information and to write the strategy and test scripts, but we would save most of that time if we were now to test another section. It makes me feel that with good development and a clear testing goal, the job can easily be done by writing and following test scripts. But it is also true that I find it difficult to understand a system by reading documents, having meetings and even watching demonstrations. It seems easier for me to concentrate on and memorise such information when I can apply it, which means learning the system by actually using it.

I am too inexperienced to conclude what good testing is, or which testing method is superior, but running the system to get reliable information, questioning that information to dig out insightful connections, and finding actual evidence to support one’s thinking all seem to be the right manners in testing.

Thanks to this subject for letting me feel the enjoyment of testing, and for providing a real business environment in which to gain practical experience. I will be happy to get involved in this field and learn more in the future.

I’ll be teaching this subject at UTS again in February next year. The course is open to students and practitioners alike.

Dear Helen, in response to your job application as a tester

I wrote this email today to Helen, who had emailed me a letter seeking work. I think it pretty much summarises what Testing Times stands for.

Dear Helen,

Thanks for contacting me through my website.

Firstly, I want to congratulate you. Not many people bother to read my website properly and send me the information I require; my estimation of you as a tester has increased a notch!

I purposefully ask testers to only send me a cover letter explaining why they want to work for me. Most testers send in a resume. 

Testing Times is not a typical software testing consultancy. Based on many years of experience, we’ve come to realise that the majority of testing performed is rubbish: unrealistic schedules, stupid estimations, test strategies and plans that no-one reads or cares about, metrics that make no sense.

We’ve decided that enough is enough, and we will only perform quality testing. If this type of consultancy interests you read on.

We see ourselves as context-driven testers (google it and read up) and we believe this is one way to promote quality testing.

Be aware, we only work with truly passionate testers. This doesn’t necessarily mean lots of technical proficiency, but we expect you to have studied testing, read blogs, and have an opinion on what quality testing is.

If this sounds like you, send your resume in!

Regards

Anne-Marie

Knowing the unknown

As a teenager, I remember being struck by the poignancy of the tomb of the unknown soldier. I’m not sure which tomb it was, in which city. I don’t recall there being one in Ireland, so perhaps it was in London. I remember feeling sad and perhaps a little understanding of the horror of war crept into me. For such a simple monument, the message was powerful.

For me, the tomb itself signified a lot more than missing soldiers. Somehow it symbolised those untold stories. Who was that person? Why did they die? Was it painful? Did they have a wife, children? It makes me think about history too, and how it really is a story told by the victor. We rarely hear the story of the defeated. So many stories untold.

Roll on many, many years, and I realise that we in software development, particularly in agile, have many untold stories that only see the light of day when we find bugs in software. We fail to hear the stories that stakeholders wanted to tell but that fell aside because of time pressure. We fail to hear stories that stakeholders have not realised exist. We fail to hear stories because some stakeholders weren’t seen as needed. We fail to tell stories because we simply don’t think they’re important enough. It’s a wonder software works at all!

Testing helps us uncover and tell these untold stories. How? Each bug we find has a story behind it. It may be the story of why the bug came into being in the first place. Or the story may turn out to be unimportant. Often we not only uncover stories, we also end up adding meat to the stories that already exist.

I find this particularly true of agile, with its simplistic approach to stories that start with “As a <insert oversimplified description of user here>, I want <some oversimplified goal> so that I can <insert oversimplified ambiguous reason>”. I do understand that the point of these stories is to initiate conversation, but in my experience the conversations are often skipped and this skeleton of a sentence ends up becoming the story. And as Alister Scott pointed out, these stories would totally fail the bedtime story test performed by any five-year-old. Regardless, these skeleton-like stories then get converted into automated acceptance tests, which significantly influence the decision to release or not. Well, in my experience anyhow.
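
To illustrate (the story, names and code below are all invented for this sketch, not taken from any real project), here is how a skeletal story tends to collapse into a single happy-path acceptance test, and it’s that check, rather than the conversation that never happened, that ends up gating the release:

```python
# Hypothetical skeleton story, straight from the template:
#   "As a customer, I want to transfer money so that I can pay people."
# And the thin automated acceptance test it typically becomes.

class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer(src, dst, amount):
    src.balance -= amount
    dst.balance += amount

def test_customer_can_transfer_money():
    # The whole story, reduced to one happy path. Nothing here asks:
    # what if the amount is negative, or exceeds the balance? What
    # about currency, concurrency, audit trails - the untold stories.
    alice, bob = Account(100), Account(0)
    transfer(alice, bob, 40)
    assert alice.balance == 60 and bob.balance == 40
```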

As I understand it, one of the reasons for these simplistic stories is the goal of “dealing with the problem at hand”. By focusing on only what needs to be done now, we avoid over-engineering a product. Now believe me, after working on some large-scale telecommunication switches in the ’80s and ’90s, I totally appreciate this sentiment. However, the drive to simplicity has, I think, led to deficiencies in how we model our systems.

For example, we fail to recognise that some systems cannot (and perhaps should not) be modelled from a user perspective. (I’m happy to be proven wrong on this, though my experience testing R&D products and layer 3 protocols leads me to believe it is the case.) Also, by focusing on only the problem at hand, we fail to appreciate the subtleties and impacts of the unknown unknowns.

I’d like to see software development attempt to redress this balance. When I went to Agile Australia lots of people were talking about systems thinking, and my first response was “Brilliant!”, but it seemed that the systems thinking was focused primarily on business systems, as opposed to products. Perhaps it’s me with too narrow a focus, but I’d really like to see more discussion on how we can become better at modelling software in agile. For one thing, let’s move away from telling stories ONLY from a user perspective. There are many ways to model a system, and a greater diversity of models may bring about deeper appreciation and understanding of the problem we’re trying to solve.

And so to the unknown soldier. I’m glad no-one has tried to simplify his story into a sentence beginning with “As a soldier…”. The monument conjured up more questions than answers, questions about the unknown. Questions that can never be answered. He was the unknown soldier, and it was fitting. I guess one question to ask is this: is the unknown story a fit for agile?

The title for this post is inspired by Colin Cherry’s talk at KWST3 about Johari windows. [thanks for the spelling check Srinivas]