Scientia potentia est

“Knowledge is power”

Jerry Weinberg cites courage as the most important trait in a tester. Quoting Kipling, Jerry says testers need to “keep your head when all about you are losing theirs and blaming it on you.”

But I see another type of courage at play in software testing. Testers are foremost learners. Through enquiry, they learn about a system. The information they gather facilitates many kinds of decision making, from releasing to designing new features.

Observe great testers and you will discover an insatiable desire to learn more, not only about the product but about the world around them, often incorporating what they learn into their testing.

For many of us, discovering that our learning is within our control and within our means can itself be a road of discovery. It takes courage to start that journey, but it also takes courage to continue along its path.

Young Luke Skywalker found the ‘Force’ early on in life. Yoda helped him connect with that power, but even then, he needed guidance and a reminder to ‘Use the Force’ when up against the evil Empire.

Some of us need that reminder now and then. We know we have the ability to learn, and we know of its power, but we forget to use it. Especially when things get tough.

“Knowledge is power”

When things don’t go the way you want, when the pressures of daily life cloud your ambitions and goals, it can be easy to lose sight of learning. Here’s what I’ve discovered, though: by focusing on learning in these times, you gain great strength.

Will the actual knowledge you learn help you succeed? Perhaps. What really counts is that through learning comes power in the form of ownership and self-belief. You may not be able to control the situation at hand, but by being open to and owning your learning, you regain a sense of control and a sense of focus.

So when the dog bites, when the bee stings and when you’re feeling sad, remember there is solace in learning. It’s not only an escape, or a way of learning how to deal with the situation; it helps you take ownership of and responsibility for the next step. Who knows, learning may be just the ticket you need to recharge those batteries, giving you the juice to continue on your journey or perhaps dump it for a different destination.

This post was first published on Medium. Mauri Edo tweeted about it recently and I’ve decided to post it here too because I like it so much.

Why waterfall kicks ass

I read a blog post about why waterfall is NEVER the right approach and I feel compelled to respond to what’s touted as the waterfall mindset. Here’s a copy of the paragraph, but you can read the whole post on the above link to get a better sense of context.

I actually don’t believe adopting waterfall as an approach is ever a good choice.  Waterfall comes with the following mindset:

  • we don’t need feedback between the steps of requirements, analysis, design, code, test
  • we can hand work off
  • big batches are ok since they enable us to be more efficient
  • specialized skills working only on their specialty is good
  • we can understand the work to be done before we do it
  • written requirements can specify what we need

Putting aside for now the use of absolutes, let’s address this waterfall mindset:

1) we don’t need feedback between the steps of requirements, analysis, design, code, test

I’ve worked in both waterfall and agile over the years. In those ‘bad old days’, when no-one appreciated collaboration, we used to extensively review requirements. This meant that testers offered valuable input into requirements before any piece of code was developed. Since the invention of ‘agile’ almost everyone has discovered the 3 amigos, but honestly, this is not an agile concept; it existed way before agile was even thought of.

2) we can hand work off

Honestly, I don’t understand this sentence. I’m serious. I can think of many reasons why handing work to others is a good thing. For instance, if I’ve got too much work and I’m in danger of becoming a bottleneck, I hand some of my work over to someone else. If that’s a waterfall mindset, it’s one I like to have.

3) big batches are ok since they enable us to be more efficient

What do you mean by efficient here? Does it mean quicker, better quality, less waste? Efficient in what? Design? Writing code? Testing? Support? If I wish to carry 20 oranges from point A to point B, is it more efficient to carry them one at a time, or do I get a bag and carry them all together? Try delivering a quarter of an IC circuit to a customer and requesting feedback. Sometimes delivering something in one batch IS the more efficient way to proceed.
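
The oranges argument can be made concrete with a toy cost model of my own (not from the post I’m responding to): if each delivery trip carries a fixed overhead on top of a per-item cost, the overhead dominates when batches are tiny. The numbers here are invented purely for illustration.

```python
import math

def total_cost(items, batch_size, trip_overhead=5.0, per_item=1.0):
    """Toy model: cost = (number of trips * fixed overhead) + (items * marginal cost)."""
    trips = math.ceil(items / batch_size)
    return trips * trip_overhead + items * per_item

oranges = 20
print(total_cost(oranges, batch_size=1))   # 20 trips: 20*5 + 20 = 120.0
print(total_cost(oranges, batch_size=20))  # 1 trip:   1*5 + 20 = 25.0
```

Of course, the model says nothing about feedback, which is exactly the trade-off being argued about: small batches pay overhead to buy information.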

In waterfall, we did have teams, and work was allocated into smaller isolated tasks. It was rare that one person developed the whole product. In fact, when I worked on telecommunication systems in the nineties, the concept of frameworks was being introduced, segregating data from its transportation method and allowing people to work on separate parts of a system in parallel.

4) specialized skills working only on their specialty is good

When I started working at Nortel Networks in 1994 it was all waterfall, except we didn’t call it waterfall then, we called it software development. Nortel Networks had a policy that encouraged developers and testers to spend time working in each other’s ‘specialisation’ areas. For six months I became a developer working to deliver software. I was taught C++ and object oriented design principles, so I’m uncertain why you think this is a waterfall mindset.

5) we can understand the work to be done before we do it

Why is being able to understand work before you begin considered a ‘bad’ idea? I think this phrase is too ambiguous to discuss with any merit.

6) written requirements can specify what we need

Is it the word requirement, or is it the fact that it’s written that makes this a ‘bad’ mindset? Yes, we do have written requirements in waterfall. We also have written user stories in agile, so what is the point? Just because stories are written in Confluence doesn’t make them any less written.

In the bad old waterfall days, I facilitated workshops with the business and IT teams to determine and understand risk. Lots of verbal collaboration, lots of whiteboard discussions. I’ve worked in ‘agile’ teams where little communication takes place, no stand-ups, nothing. In fact, one developer spent a week working on the wrong story without realising it.

I’ve worked on some fantastic waterfall projects that blew the socks off some really crappy agile teams I’ve worked with recently. In these waterfall projects, I’ve worked in harmony with great developers and testers, working closely together. We were afforded sufficient time to examine the system as a whole instead of its parts, something that these days appears to be a bit of a luxury. I’ve worked in environments that develop hardware, firmware and software where upfront design and ‘big batch’ systems thinking helped us understand that we were not merely developing code, but attempting to solve a problem.

It’s easy to conflate poor software development practices with waterfall, just as it’s easy to conflate an agile approach with ‘good’ practice. To me the biggest changes since those days are technological ones, which allow us to develop, integrate and compile quickly and relatively cheaply. It’s the technology that has allowed us to develop in the small tasks we are familiar with today. In the early nineties the concept of layering and isolating according to purpose was coming into play, but we simply didn’t have the sophisticated systems that allowed us to develop in the way we wanted to.

A second important change was the conception of the agile manifesto. To me the agile manifesto is a stroke of genius: people who had the courage to espouse an ideology that placed people over process, tools or artefacts. For many, this has changed how we think about developing software.

But let’s not forget that the agile manifesto was developed by people who worked in what you call a waterfall environment. It seems to me these people had quite a different mindset than the one suggested above. It seems to me, those who developed the agile manifesto did so with ideas of collaboration and an emphasis on the ‘humanness’ of developing software. Did those ideas come about as a result of what was done badly in waterfall, or because of what they saw working well?

I suspect it was a little of both.

Don’t get me wrong. Lots of mistakes were made in pre-agile days. There was an idea of segregation between developer and tester in an attempt to avoid bias from developers. I’m glad we’ve gotten over that idea. But many of the mistakes we made in those days we’re still making now. Most software teams fail to understand testing and attempt to measure it using means that appear to have no direct correlation to quality. Many teams use measurement to lock down estimates as opposed to using it as a source of information for change. Go to an agile conference and count the number of talks on processes and methodologies and frameworks. Look at today’s obsession with continuous delivery as a process. What happened to the people, folks?

It’s easy to understand why. The agile manifesto is bloody hard to implement. It’s much easier to point at a tool or a process and say “that’s what we do” because it’s explicit, it’s easy to see. Programming & testing are human activities and are much harder to identify and talk about. It’s hard to describe and transfer these skills. The best way I know of is to actually perform the tasks.

We need a little more humility in acknowledging the great shoulders that agile stands on. It’s simplistic to identify in hindsight a ‘waterfall’ mindset. Such a thing did not exist. Instead, let’s view them as the agile manifesto encourages us to: as people attempting to deliver quality software, just like we are attempting to do today.

Are you serious?

Seriously, how serious are you about testing? Let’s presume you study the craft of testing. Does that make you a serious tester?

If you answer yes to this question, congratulations. You are on your way to becoming a serious tester. Now ask yourself this question:

“Do you take yourself seriously?”

If you want to be taken seriously about testing, you need to take yourself seriously. Note the difference here. Taking yourself seriously is much larger than being a serious tester.

Taking yourself seriously means you avoid behaviour and thinking such as:

“I will put myself down in front of others to make them feel better about themselves”
“I resort to behaving childishly when placed in pressure situations”
“I hand over power in order to avoid conflict”
“I feel like a fake even though my actions demonstrate otherwise”

I know this, because it’s only recently that I realised I haven’t been taking myself so seriously.

When you stop speaking to yourself in such a way and start taking yourself seriously, a wonderful thing happens. You start to believe in yourself. In fact, you have to. You owe it to yourself to do so.

I’ve discovered a new strength in this self belief. It means I have courage and strength to stand up for what I want. In doing so, I give myself the respect and honour for all the hard work I have put in.

So, let me ask the question again: Do you take yourself seriously?

Expression epitomised in Rage comics following Bill O’Reilly’s Fox News interview of David Silverman

Are we there yet?

Ever been in a long car journey packed with young kids? I remember these eight hour drives with six kids crammed into a car driving to our holiday destination. My younger brother in particular was annoying, constantly asking questions and irritating his sisters. By far the most irritating question (especially to our parents) was: Are we there yet? This line of questioning would normally begin half an hour into the journey, and would be constantly repeated throughout the journey.

This potent cocktail of repetitive questioning usually resulted in one of my parents (usually my Dad) exploding in frustration, yelling at us all to be quiet. This was typically followed by a threat of being dumped on the side of the road. (He actually carried out his threat once, dumping my brother, who, unfazed, promptly hid in some nearby field, resulting in the whole carload having to search for him.)

But it was a fair question from us kids. We had no real sense of time or distance to help gauge how far we had come and how far we had to go. We were also totally bored, with no iPods or stuff to entertain us. Singing (very Von Trapp like) took us only so far. Counting number plates helped a little. And remember, we were going on holidays; the mere thought conjured up more excitement than our poor little bodies could hold. The journey was always going to be arduous when faced with a destination that held so much promise.

As a parent myself, I have a little more sympathy for my parents. Of course, we have the luxury of allowing our kids to be immersed in some tacky iPod game, distracting them to the point they forget they are going on holidays. But I get it. I get that it’s really hard to explain to someone with little understanding of distance or time how long something is going to take.

When testers ask me how we know when we are done in Exploratory Testing, I am faced with a similar challenge. How do I help a tester understand when they are done? Pointing them to the excellent list of Stopping Heuristics on Michael Bolton’s blog helps, but how do you apply them? Some of them are easier than others. The “Time’s up!” heuristic, for example, is pretty simple to apply. But take something like the Flatline heuristic, which tells us to stop when “No matter what we do, we’re getting the same result.” As Michael points out, there are hidden risks to this. It may be that there is no new information, but it may also mean we have insufficiently explored the application in depth.

In this situation, how do we know what to do?

Like many things in testing, there is no clear-cut answer to this. A considered answer requires an understanding of what’s happening around testing, and of the implications of the decision being made. Hence the inclusion of the word heuristic.
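
To make the Flatline heuristic a little more concrete, here’s a toy sketch of my own (not from Bolton’s list): track the distinct observations each test session yields, and flag a flatline when several consecutive sessions add nothing new. The session data and window size are invented for illustration.

```python
def flatlined(sessions, window=3):
    """Return True if the last `window` sessions added no new observations."""
    seen = set()
    new_counts = []
    for observations in sessions:
        fresh = set(observations) - seen  # observations not seen in any earlier session
        new_counts.append(len(fresh))
        seen |= fresh
    if len(new_counts) < window:
        return False  # too early to call it a flatline
    return all(count == 0 for count in new_counts[-window:])

# Example: the last three sessions only repeat earlier findings.
sessions = [
    {"crash on empty input", "slow search"},
    {"slow search", "bad error message"},
    {"slow search"},
    {"crash on empty input"},
    {"bad error message"},
]
print(flatlined(sessions))  # True: no new information in the last 3 sessions
```

Even when a check like this fires, it can’t distinguish “the product is exhausted” from “my tests are too shallow”; that interpretation remains the human judgement the rest of this post is about.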

I’ve found a conversation with stakeholders about stopping heuristics *before* testing starts a useful exercise. Knowing that you have a time limit on your testing goes a long way to preventing tester angst about when to stop. Include in that conversation a discussion of what ‘done’ means, including the impossibility of complete testing. Stakeholders who get that bugs may be missed can influence which stopping heuristics you use.

Like kids in the back seat, as we test we need to repeatedly ask ourselves the question “are we done yet?”. It may be useful to use our emotions to trigger this question. For example, if I’m bored, does that mean I’m done, or does it mean I need to change something in my testing? If I’m anxious, does that mean I’m about to hit some constraint in the form of time? If I’m angry, does that mean my information is being ignored, and maybe I need to address that instead of raising a tsunami of bugs? If I’m confused, does that mean I need to explore more? Emotions like these can be useful indicators of when to ask if you’re done. Again, Michael Bolton has done lots of work in this area.

I’ve also found that in times of deep uncertainty it’s always a good idea to draw on the “phone a friend” card and ask someone more experienced than you for their opinion. There’s no shame in this; these are tough questions to answer. An outsider may have insight or additional knowledge that you’ve overlooked.

Building relationships and credibility with those around you will also go a long way to helping you in situations when that significant bug is missed. And as in most of testing, articulating your process and your decision making helps to demonstrate diligence and considered testing.

Question with sprinkle of humility on the side

Michael Bolton tweeted yesterday:

Why do testers insist on trying to be influential? I suspect part of the reason is that part of a tester’s job is to recognise problems. Too often, testers see that the *real* problem is not the software itself but the process behind the software, and go into bug prevention mode. That sort of change requires influence.

Often though, we simply don’t have the authority or the influence to make change. We may moan and tear our hair out in frustration, but at the end of the day, without the mandate to make change and the influence to implement it, there’s little we can do to change an organisation’s culture or process.

Trying to do so regardless can lead to a sense of helplessness and even, slowly over time, a sense of powerlessness. I suspect most of us at one time or another have felt like this. It’s not a great position to be in.

I know I’ve been there. I tried to change the culture of a company (a whole company, no less!) that had some negative ideas about testing and teamwork in general. (I’m talking about this at Eurostar this year.) As a consultant, I probably should have known better, but I argued (to myself) that the culture was affecting the testers’ ability to perform their job. It needed to change!

We testers are gifted with keen observation skills, and the nature of our role sometimes means we get to see and recognise problems that perhaps others don’t. But let’s not get ahead of ourselves. Without a mandate for change, we become close to the schoolyard tattler, dobbing everyone in and despised by all (including, often, the teacher). I failed to recognise that I hadn’t the authority or the influence to make these sorts of changes. I fell into a classic consultant’s trap. I really should have known better.

It’s not that we *should* or *should not* ignore these problems. Often these challenges are too complex to be solved with simple answers and probably need to be dealt with on a case by case basis. But I think adopting the tone of helper (or servant) goes a long way to contributing to an answer. In some cases a question sprinkled with a little humility can be more helpful than smothering the problem with large doses of tester sauce.

I hope I’ll remember that next time!

Yu Han on what Software Testing means to me

Last year I had the honour of teaching post graduate students the subject of software testing at the University of Technology Sydney. I asked students if they would like to write a post on testing on my blog. Yu Han did, and here it is. 

The subject of Enterprise Software Testing demonstrated the exciting and interesting aspects of testing. Those lectures and the project at Suncorp are unforgettable experiences.

During the classes, I felt that, most of the time, I was acting like a child, playing different kinds of games and drawing pictures with colorful pens. But at the end there would always be serious discussions, analyses and reporting that snapped me out of that playful state.

Those reflections made me realize that my actions and thoughts were more complicated than I could realise, which encouraged me to keep reviewing what I did and exploring how I thought. By doing that, I better understood the purpose behind those activities, and I am beginning to sense the essence of what good testing could be.

I am aware that my mind prefers to use memory or imagination to fill the gap between reality and the information I have received. By linking their similarities, I can better understand the information. But as soon as I come to this stage, my thinking can stop going deeper. I may feel satisfied with a simple explanation, or be distracted by other information which draws my attention to something else. Then I may lose the chance to find the deeper meaning of the information, or even misunderstand it.

Now I am interested in paying more attention to my flow of thinking. Questioning ideas as they appear could be a way to slow it down, which may clarify my understanding or identify potential obstacles hiding behind them. It could also make it possible to bring back those ideas or considerations which were once brought up and then omitted, or left undeveloped at that incredible thinking speed. Those ideas and considerations can prove questionable as soon as I try to challenge them.

It could turn out that there are missing facts behind an imagined explanation which feels reasonable but is not actually practical. Once I try to gain evidence to support those ideas, I may realize they are incorrect or unproven. I may need to confirm them before I go further, especially when the goal is based on such ideas. Like the assumptions made in the Big Trak exercise: our group needed to test our ideas of what the other buttons did before we could finally test how the targeted button worked. We questioned our ideas but found it hard to move forward until we knew which ones were correct. We came up with many hypotheses, but failed to prove them all at once in practice. After we had some sort of understanding, we found that we should confirm those assumptions before we continued, which led us to come up with a debugging strategy for our tests. It appears that questioning information not only encourages us to seek the truth but can also inspire us to develop our ideas. In this sense, critical thinking can help one to wisely seek information and to reorganize it into knowledge for idea development.

This also reminds me of the concept of Exploratory Testing. In the previous example, we learned the mechanism, built the tests and proved the ideas all by interacting with and exploring the actual system. It seems that Exploratory Testing could be a natural way to build and run tests while we also need to learn about the system. By understanding how it works, we can work out how to test it and find bugs. However, it’s difficult to tell how much time I need to finish the task.

Thanks to the experience at Suncorp, I saw the efficacy of Scripted Testing. Our team performed the testing itself within only a few hours. We didn’t need to worry about anything else, such as knowing other parts of the system or even the depth of the targeted section. It did take a couple of weeks for us to learn the background information and to write the strategy and test scripts, but we could save most of that time if we were now to test another section. It makes me feel that with good development and a clear testing goal, the job can be easily done by writing and following test scripts. But it is also true that I find it difficult to understand a system by reading documents, having meetings and even watching demonstrations. It seems easier for me to concentrate on and memorize such information when I can apply it, which means learning the system by actually using it.

I am too inexperienced to conclude what good testing is or which testing method is superior, but running the system to get reliable information, questioning the information to dig out insightful connections, and finding actual evidence to support one’s thinking seem to be the correct manners in testing.

Thanks to this subject, which let me feel the enjoyment of testing and provided a real business environment to gain practical experience. I will be happy to get involved in this field and learn more in the future.

I’ll be teaching this subject at UTS again in February next year. The course is open to students and practitioners alike.

Dear Helen, in response to your job application as a tester

I wrote this email today to Helen, who emailed me a letter seeking work. I thought it pretty much summarises what Testing Times stands for.

Dear Helen,

Thanks for contacting me through my website.

Firstly, I want to congratulate you. Not many people bother to read my website properly and send me the information I require; my estimation of you as a tester has increased a notch!

I purposefully ask testers to only send me a cover letter explaining why they want to work for me. Most testers send in a resume. 

Testing Times is not a typical software testing consultancy. Based on many years of experience, we’ve come to realise that the majority of testing performed is rubbish. Unrealistic schedules, stupid estimations, test strategies and plans that no-one reads or cares about, metrics that make no sense.

We’ve decided that enough is enough, and we will only perform quality testing. If this type of consultancy interests you read on.

We see ourselves as context driven testers (google and read up) and we believe this is one way to promote quality testing.

Be aware, we only work with truly passionate testers. This doesn’t necessarily mean lots of technical proficiency, but we expect you to have studied testing, read blogs and have an opinion on what quality testing is.

If this sounds like you, send your resume in!

Regards

Anne-Marie

Knowing the unknown

As a teenager, I remember being struck by the poignancy of the tomb of the unknown soldier. I’m not sure which tomb it was, in which city. I don’t recall there being one in Ireland, so perhaps it was in London. I remember feeling sad and perhaps a little understanding of the horror of war crept into me. For such a simple monument, the message was powerful.

For me, the tomb itself signified a lot more than missing soldiers. Somehow it symbolises those untold stories. Who was that person? Why did they die? Was it painful? Did they have a wife, children? It makes me think about history too, and how really it’s a story told by the victor. We rarely hear the story of the defeated. So many stories untold.

Roll on many, many years and I realise that in software development, particularly in agile, we have many untold stories that only see the light of day when we find bugs in software. We fail to hear the stories that stakeholders wanted to tell, but which fell aside because of time pressure. We fail to hear stories that stakeholders have not realised exist. We fail to hear stories because some stakeholders weren’t seen as needed. We fail to tell the story because we simply didn’t think it was important enough. It’s a wonder software works at all!

Testing helps us uncover and tell these untold stories. How? Each bug we find has a story behind it. It may be the story of why the bug came into being in the first place. Or the story may turn out to be unimportant. Often, not only do we uncover stories, we also end up adding meat to the stories that exist.

I find this particularly true of agile. Take the simplistic approach to stories that start with “As a <insert oversimplified description of user here>, I want <some oversimplified goal> so that I can <insert oversimplified ambiguous reason>”. I do understand that the point of these stories is to initiate conversation, but it’s my experience that often conversations are skipped and this skeleton-like sentence ends up becoming the story. And as Allister Scott pointed out, these stories would totally fail the bedtime story test performed by any 5 year old. Regardless, these skeleton-like stories then get converted into automated acceptance tests, which significantly influence the decision to release or not. Well, in my experience anyhow.

As I understand it, one of the reasons for these simplistic stories is to achieve the goal of “dealing with the problem at hand”. By focusing on only what needs to be done now, we avoid over-engineering a product. Now believe me, after working on some large scale telecommunications switches in the 80’s and 90’s, I totally appreciate this sentiment. However, the drive to simplicity has, I think, led to deficiencies in how we model our systems.

For example, we fail to recognise that, firstly, some systems cannot (and perhaps should not) be modelled from a user perspective. (I’m happy to be proven wrong on this; my experience in testing R&D products and layer 3 protocols leads me to believe this is the case though.) Also, by focusing on only the problem at hand, we fail to appreciate the subtleties and impacts of the unknown unknowns.

I’d like to see software development attempting to redress this balance. When I went to Agile Australia lots of people were talking about systems thinking and my first response was “Brilliant!”, but it seemed that the systems thinking was focused primarily on business systems, as opposed to products. Perhaps it’s me with too narrow a focus, but I’d really like to see more discussion on how we can become better at modelling software in agile. For one thing, let’s move away from telling stories ONLY from a user perspective. There are many ways to model a system, and greater diversity of models may bring about deeper appreciation and understanding of the problem we’re trying to solve.

And so to the unknown soldier. I’m glad no-one has tried to simplify his story into a sentence beginning with “As a soldier…”. The monument conjured up more questions than answers, questions about the unknown. Questions that can never be answered. He was the unknown soldier and it was fitting. I guess one question to ask is this: is the unknown story a fit for agile?

The title for this post is inspired by Colin Cherry’s talk at KWST3 about Johari windows. [thanks for the spelling check Srinivas]

Big Trak Robots at KWST3

I had a fantastic time at KWST3 this year. There were a lot of firsts for me. The first time I was in New Zealand, the first time I had been to KWST and the first time I met Brian Osman, plus a whole heap of other testers.

I don’t think I’ve been to such a learning event for a while. It was truly a place of inspiration, introspection and challenge. Brian and Colin have already written about their thoughts and I will add mine too in a different post, but first I want to talk about Robots. Oliver asked me to bring over my Big Trak Robots after seeing them in action at CITCON in Sydney.

I split the group into two teams and set them a testing challenge, which involved running experiments to determine what one of the buttons did. What ensued was an hour of fantastic learning, mostly for me, as I watched a group of highly skilled testers apply their minds to the exercise. The testers did some really interesting work. One team started by performing fairly complicated tests, which turned out to be a real blessing for them. They then made a model of the functionality and formed a hypothesis about what the solution might be.

Team Two took a different tack. They started with simple tests (a common focusing technique, and one I often use) but the tests didn’t offer enough information. In the debrief that followed, we discussed how, though simple tests are quick and easy, they sometimes fail to offer sufficient and meaningful information. Team Two recovered by defocusing, and both teams offered their solution.

There are so many lessons to learn from this exercise, and depending on the group, different lessons are learned. This was one of the most enjoyable times I’ve run it, I think because the testers were skilled and could apply themselves with confidence. They were also very self-aware, and so debriefing was more about letting the testers volunteer information than asking probing questions. Or maybe I’m getting better at handing over the learning to those really in charge.

Andrew Robbins and Richard Robinson are running the test lab at Tasting Let’s Test and the Robots are going to have their moment in the spotlight there too! See you all there!

The engineer’s curse

It’s a rare luxury this week as I have the house to myself. Perhaps those of you with partners and/or younger kids will identify with this sentiment! Suddenly life seems easy and manageable. I don’t feel responsible for looking after other people and their problems. The challenge of managing a house, being a mother and partner, running a business, organising a conference and, oh yeah, doing some testing is a lot easier when it’s only yourself to look after!

When I took PSL last year, one of the ‘skills’ I realised I had was running around making sure everyone on the team was being heard and was ok. Another word for this is meddler. How, I thought, could the team function without me unless I was there to keep it together?

I guess, after reading the first paragraph again, work isn’t the only place I fall into meddling.

In a moment of retrospection, I feel my desire and ability to solve problems is sometimes more of a curse than an asset. Perhaps in my drive to solve problems, I forget something crucial: often it’s not about solving a problem, it’s about having the conversation.

Earlier this year, Michael Bolton introduced me to Marshall McLuhan and his concept that “the medium is the message”. Is it possible, then, that solving a problem is not as important as the process used to solve it? Or in other words, that the conversation you have to solve a problem is more important than the resolution of the problem itself?

A while back I read David Bohm’s book “On Dialogue”. In it, he encourages groups in dialogue to take the focus off decision making and instead put the emphasis on the process of communication. He encourages participants in the dialogue process to suspend judgement and be open to what emerges from dialogue. In this way a group can develop deeper meaning. For example, instead of trying to create a common terminology, common terminology emerges of its own accord.

He also talks about the challenge of problem solving. Mostly, what we call ‘problems’ are in reality paradoxes, and it’s inherently impossible to solve such ‘problems’. Bohm believed that the dialogue process (I haven’t tried it) is a way of acknowledging and respecting differences, allowing new ideas to emerge.

We all have a basic need (well, I like to think we do!) to connect with and understand each other. To be *human* if you will. It’s this human need, at a tacit level, that perhaps defines who we are, more than any high-level problem-solving skill dictated by our frontal lobes. As an engineer, I’m taught communication is essential to solve a problem. As a human, I’ve learned that it’s about taking the time to include people in your day, to actively listen, regardless of the outcome.

Just a thought.