Saturday 28 July 2012

What is TDD made for? Requirements? Design? or Code?

What is Test Driven Development made for? Is it for requirements? Design? or Code?

First, TDD conveys the message – “You know it well, before you code it”. It encourages building Quality into our software proactively, through “development through testing”, rather than reactively, through “testing after development”. So we use the testcases for constructing the software rather than merely validating it.

Next, a question may arise – “How would I implement the TDD cycle within a given iteration or sprint?”
---Should I implement a Waterfall cycle within the iteration (Get all testcases ready first, then design, code and test)?
---OR should I write the code on a test-by-test basis, and progressively refactor it so that the design evolves through refactoring? (Lean within lean development)

Oops, the seminar I attended on Agile didn’t cover this. :(

It is always easy to suggest one approach or another, but before we come to any conclusion, let us do a small exercise on TDD.

Let us say our current iteration is “implementing an elevator (lift)” for an on-going software project. Don’t ask me what that project is about. :) I picked this example because a real-world thing that almost all of us see and use day in and day out conveys the concept better.

Our Architect is now kick-starting the testcase writing with a brainstorming session. Let Mr. Architect speak it out in his own words…
--------------------------------------------------------------------------
Pass-1:
What are the different requests, and how do they need to be processed?
(External Request) Elevator has to go to a floor if there is User request from that floor.
----Elevator has to move UP if the request is from an upper floor.
----Elevator has to move DOWN if the request is from a lower floor.
(Internal Request) Elevator has to go to a floor if there is a User (inside the elevator) request to go to that floor.
----Elevator has to move UP if the request is to an upper floor.
----Elevator has to move DOWN if the request is to a lower floor.

When does the elevator stop?
----Elevator has to STOP at a floor if there is an external request or an internal request to stop at that floor.

What happens when the elevator stops at a floor?
---- Open the Door
---- Update the internal request as DONE or DISCARD.
---- Allow time for passenger to Get-In and Get-Out
---- Handle max-load checking, if any
---- Close the door
---- Update the external request as DONE or DISCARD.
---- (Well, I am not writing all other features (light, fan, displays etc.) of our modern elevators as we are not really going to write the software here)

...(Okay, my requirements are evolving well. TDD is proving good, thanks to my Agile coach.)...
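
If we captured these Pass-1 notes directly as executable specifications, a first sketch might look like the one below. The class and method names are hypothetical illustrations (assuming JUnit 4), and the empty bodies are deliberate: they are the blanks this iteration will fill in.

-----------------------------------------------------------------------------------------
import org.junit.Test;

// Pass-1 testcases captured as executable specifications.
public class ElevatorPass1Spec {

    @Test public void movesUpForExternalRequestFromUpperFloor()   { /* TODO */ }
    @Test public void movesDownForExternalRequestFromLowerFloor() { /* TODO */ }
    @Test public void movesUpForInternalRequestToUpperFloor()     { /* TODO */ }
    @Test public void movesDownForInternalRequestToLowerFloor()   { /* TODO */ }
    @Test public void stopsAtFloorWithPendingRequest()            { /* TODO */ }
    @Test public void opensDoorAndMarksRequestDoneOnStop()        { /* TODO */ }
}
-----------------------------------------------------------------------------------------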

Then, a junior engineer in my team brought up a question:

“What if the elevator is going down and there is a request from a floor (on its route) to go up? Should the elevator stop to board the user on its way down, or should it stop on its way up?”

(This junior joined the industry barely a year ago and doesn’t carry a “Software Architect” title as I do, but has brought up a very good design point. Is this what “Wisdom of the Group” means?)

Well, the answer is: “Let’s check with the customer on what he wants. Either way, we need a mechanism to decide whether or not to stop on the way. We will treat this as a Black Box (a decision-making method), which will also be handy from a code maintenance point of view if the requirement changes in the future.”

Then, someone else, handling another feature, raised another question:

“What if there are requests from upper and lower floors at the same time? How do we decide which direction the elevator has to move?”

I seriously envy her for asking this question, but I have started building a few more testcases on top of it, and ‘it is only getting better’…
----If both the requests are to move UP
----If both of them are to move DOWN
----If one of them is to move UP and the other is to move DOWN
----Is there a Least Travel requirement to be handled?…
Well, this has also ended up as a TODO for us, but we have added a very good set of testcases in the form of “executable specifications”. And what do good “executable specifications” do? They feed the design, and that is what has happened now: we need another Black Box (a decision-making method) to decide whether to move up or move down. Also, we have identified the parameters (variables) that these decision-making methods need.

Along with this, we have identified the different actions (just execute them – moveUp, moveDown, Open, Close). We have also, almost, identified the Data Structures to store the User Requests (Internal and External). Overall, our skeleton code is pretty much visible now.
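
To make that visible skeleton concrete, here is a minimal Java sketch of where we stand. Every name in it is a hypothetical illustration, and the two Black Boxes are deliberately left as TODOs because their rules are still questions for the customer.

-----------------------------------------------------------------------------------------
import java.util.TreeSet;

// Hypothetical skeleton derived from the testcases so far.
public class Elevator {

    enum Direction { UP, DOWN, IDLE }

    private int currentFloor = 0;
    private Direction direction = Direction.IDLE;

    // Data Structures for the two kinds of User Requests.
    private final TreeSet<Integer> externalRequests = new TreeSet<>();
    private final TreeSet<Integer> internalRequests = new TreeSet<>();

    // Black Box #1: stop for an opposite-direction request on the way?
    private boolean decideStopOnTheWay(int requestFloor, Direction requestDirection) {
        // TODO: awaiting the customer's answer; the parameters are already identified.
        return false;
    }

    // Black Box #2: which way to move when both upper and lower floors are waiting?
    private Direction decideDirection() {
        // TODO: handle both-UP, both-DOWN, mixed, and Least Travel cases.
        return Direction.IDLE;
    }

    // Actions: just execute them.
    private void moveUp()    { currentFloor++; }
    private void moveDown()  { currentFloor--; }
    private void openDoor()  { /* allow time to get in/out; max-load check if any */ }
    private void closeDoor() { /* mark the requests for this floor as DONE */ }
}
-----------------------------------------------------------------------------------------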

Of course, there are still a few questions to be answered, but they are not impediments: we have enough work to do for the next few days before the answers arrive. We don’t need to blame anyone that we are awaiting answers to start coding. We have “blanks” (TODOs) to fill in our design, and we know exactly where those blanks are.
--------------------------------------------------------------------------

Now, let’s get back to those questions that we had earlier:

---Should we implement a Waterfall cycle within the iteration?
----Even if the scope is just one iteration, practically we cannot get all the testcases right, or wait until they are right, before starting the design and code. Let us first organize our testcases properly, and the design will be discovered almost automatically.
----Well-organized testcases also help distribute the work properly when multiple people are involved in the implementation.
----A random organization of testcases would help neither a good design nor a proper work distribution.

---OR should we write the code on a test-by-test basis and progressively refactor it so that the design evolves through refactoring?
----It looks like one size does not fit all. It may still work, but there would be a lot of code-refactoring and test-and-retest cycles, and they consume a lot of time.
----Also, remember that automated unit tests are not possible for every scenario of software development; let us not ignore this fact.
----Refactoring doesn’t just mean abstracting the reusable code and thus evolving the design with a test-by-test approach. Let us not make it too lean in the name of lean development.
----If a few minutes of 'collective thinking' now could save hours at a later point in time, then do it NOW.

So, let us go with the facts (rather than what we were taught) and use the right mix of the waterfall and test-by-test models as the context demands.

Note: If you observe the elevator functionality derived from these testcases, you can easily see that the Elevator works like a State Machine (an OOAD design). But the objective of this article is not to explain any particular design pattern. Maybe we will take that up some other time.


Are your testcases well-organized for a good TDD implementation?

Have they proven good for an organized task distribution? Don't forget this project management issue.

Is your TDD approach feeding enough for a good design?

Are you ensuring that you are implementing 'team collaboration' and the 'wisdom of the group' in the right way to get well-organized testcases?

Do you have an eye for minimizing rework by spending the right effort at the right time?

(Attribution: Images on this post are downloaded from FreeDigitalPhotos.Net)

Wednesday 25 July 2012

Don't ignore those Exceptions - they will come back to you


“We did an exceptionally awesome job on the design through the Agile and TDD practices our company had recently adopted. It evolved exceptionally well through our iterative and incremental processes. With a lot of negative testing, we ensured our software was robust. Yet a few weeks after the production deployment, the customer started reporting a good number of errors, with a variety of exceptions and stack traces one after another, and the maintenance phase became a nightmare – support engineers spending a lot of time debugging the issues, the development team delivering patches with quick or partial fixes, and all of us stuck in an almost never-ending report-and-fix cycle.”

Stories like this are not uncommon in the IT services industry. Yes, poor exception handling will prove very costly to you; costly enough to make your customer not extend the contract with you. You might have done a cool job with your TDD and negative testing to ensure the robustness of your software. But exceptions are what they are – exceptions – and they need careful attention to be handled properly in your software.

Here are some pitfalls (the "why" part of it) that many of us fall into with Exception Handling.


--We tend to be reactive rather than proactive, in that we don’t handle exceptions properly unless some serious error is reported. Even then, we limit the bug fix to the particular error reported.

--‘Satisfy the Compiler’ approach – We think we are good as long as we satisfy the compiler.

--The ‘these exceptions are hard to occur’ approach – We assume ideal production environments and think certain exceptions will hardly ever occur.

--The misconception that Exception Handling is a developer tool used for debugging purposes, for tracing the root cause - yes, it is that, but it is not limited to that.

--‘Swallowing Exceptions’ - We do nothing more than printing a Stack Trace or an Error Message. Exception Handling is not just Exception Logging. 

--The 'I deal with my code only & I only deal with my code' approach - not understanding the BIG picture of where a module/method fits in the application flow, we handle an exception where it need not be handled, OR we don't handle an exception where it needs to be.
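
To make the 'Swallowing Exceptions' pitfall concrete, here is a small, hedged Java sketch. The scenario (reading a config file) and all the names are hypothetical; the point is only the contrast between satisfying the compiler and actually deciding what a failure means.

-----------------------------------------------------------------------------------------
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SwallowedExceptionDemo {

    private static final Logger LOG = Logger.getLogger("app");

    // Pitfall: "handled" only to satisfy the compiler. The caller gets an
    // empty list and never learns that the read failed.
    static List<String> readConfigSwallowed(Path file) {
        try {
            return Files.readAllLines(file);
        } catch (IOException e) {
            e.printStackTrace();           // swallowed
            return Collections.emptyList();
        }
    }

    // Better: decide explicitly. Log with context AND make sure the owner
    // of the workflow sees the failure (here, by rethrowing a wrapper).
    static List<String> readConfig(Path file) {
        try {
            return Files.readAllLines(file);
        } catch (IOException e) {
            LOG.log(Level.SEVERE, "Cannot read config " + file, e);
            throw new RuntimeException("Config unreadable: " + file, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readConfigSwallowed(Paths.get("missing.conf"))); // prints []
        try {
            readConfig(Paths.get("missing.conf"));
        } catch (RuntimeException e) {
            System.out.println("Caller notified: " + e.getMessage());
        }
    }
}
-----------------------------------------------------------------------------------------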


(I am standing up here to admit - "Yes, I made those mistakes too".)



Not that this has to do with developers’ lack of technical knowledge on the subject. As developers, we generally know the technical facts of Exception Handling, but we sometimes don't ask these very important questions about its implementation:
  


--What do these Exceptions mean to the Software or the Customer’s business or the Workflow of the Application? [Beyond the development cycle]

--What has to be done if these Exceptions occur in the production?

----Do they need to be recovered?
------If so, how can they be recovered? And what's next? Is it "Fail-Safe" or "Control Change"?


----Are they NOT to be recovered in the software? If so, what’s the action expected from the user?
------Can they be safely ignored and printed to the Application Log? If so, which Application Log?
------If the user has to take appropriate action, what is the meaningful information to be printed to the Log? And again, which Log in this case (Different Logs have different meanings to the business users)?

--Is enough attention being paid to using Asserts ("Fail-Safe" & "Prevention rather than Cure")?

-- Is enough attention given to Testing the Exception Handling? (Also, not all the Exceptions can be produced or reproduced in a Testing or Staging environment. But the application design shouldn’t ignore this.)


So, let us expect and respect those exceptions:

--We need to implement Exception Handling hand-in-hand with the Application Design. Just like our Application Design, the Exception Handling design needs to evolve too; in fact, it is an integrated aspect of the Application Design rather than an add-on.
--We need to analyze the Exception flow with the assumption that the exceptions have happened, just as we do for the general Application Design. In layman's terms, we can call it a "pessimistic design" that serves its purpose right.
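
To make the "pessimistic design" idea concrete, here is a minimal sketch, assuming a hypothetical report-generation workflow. Every name in it is illustrative; the point is that each catch block records an explicit, designed decision: recover, fail safe, or surface a meaningful message to the user.

-----------------------------------------------------------------------------------------
// Hypothetical sketch: design each step assuming its exception HAS happened,
// and write down the decision (recover, fail safe, or inform the user).
public class ReportJob {

    public String run(String accountId) {
        try {
            return fetchFromPrimarySource(accountId);
        } catch (SourceUnavailableException e) {
            // Decision 1: recoverable -> fail over to the secondary source.
            return fetchFromSecondarySource(accountId);
        } catch (InvalidAccountException e) {
            // Decision 2: not recoverable in software -> tell the user what to do.
            return "Account " + accountId + " is not valid; please verify the ID.";
        }
    }

    private String fetchFromPrimarySource(String id)
            throws SourceUnavailableException, InvalidAccountException {
        throw new SourceUnavailableException(); // simulate an outage
    }

    private String fetchFromSecondarySource(String id) {
        return "report-for-" + id; // the fail-safe path
    }
}

class SourceUnavailableException extends Exception {}
class InvalidAccountException extends Exception {}
-----------------------------------------------------------------------------------------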

Well and good if your team is already composed of design gurus and seasoned architects who are doing this job to perfection. But if that is not the case, it is definitely something your team has to exercise – and so many things will fall into place.

Having said that, building software with 100% robustness with respect to Exceptions is, in reality, very difficult to realize. Maybe it happens over a long maintenance cycle. What we must do is respect the Exceptions, keep improving the robustness of the software, and get as close as possible to perfection.





Does your team understand what those Exceptions mean after the development cycle?
Are your team members in sync with the Exception Handling mechanism? Is there consistency in handling Exceptions?

Does your Test Planning have a place for testing Exceptions?

Have you collaborated with the business side (customer) on handling these unusual conditions? Is your implementation in sync with their expectations?


Note: Though I have tried to take a language-neutral and technology-neutral stand here to highlight the importance of Exception Handling, I have been a little biased towards user-level application development. System-level programming is a totally different subject, with its own traps, page faults, invalid memory references etc. The closer we get to those 0s and 1s in our programming, the greater the demands for robustness – which goes with the saying “It doesn’t matter how strong your walls are when your foundation is weak”. (I have not done much system programming, as a matter of fact.)


"A pessimistic approach comes of greater importance in designing software that is highly robust"

(Attribution: Images on this post are downloaded from FreeDigitalPhotos.Net)

Wednesday 18 July 2012

Project Tracking Tools in Agile

This is a big question that most of us come across once we start dealing with "How to Implement Agile". Do we really need tools that are made for Agile? Is it about the principles or is it about the tools?

First off, Agile practices emphasize "Getting the work done" over "Following the Process" - so it looks like no tool is required for task management, and a whiteboard-and-story-cards kind of approach is good enough.

On the other hand, you need to answer very basic questions at any given point of time, to anyone from any location.
- How many tasks (Backlog) are pending?
- Who is working on what?
- How many bugs need to be fixed?
- What are the tasks being worked on for this iteration?

Assume these questions are being asked by a project stakeholder at a different site; we can't show him the Task Whiteboard or send him a snapshot of it on a daily basis. It is impractical.

So, maintaining the balance - "Not deviating from the Agile principles" while "Respecting the interests of all the stakeholders, including the execution team" - we need a tool with simple "search", "categorization" and "prioritization" capabilities.


Here is an example of what we used to do for project tracking purposes:

-- Someone from the team used to make a TODO list from our daily meetings or Sprint meetings. He did not need to be the Scrum Master, and we actually rotated this job among the team members.

-- Rotating this job also helped create more "participation" and a "sense of ownership" within the team, and the team got a better understanding of tasks beyond their own.

-- After the meeting, the person who made the TODO list would create Tasks and Bugs in a Bug Management tool (we used Bugzilla). Tasks represented the new development work (Bugzilla has support for Tasks). The Tasks and Bugs in Bugzilla looked like this:
-----------[Task] [Login][Implement Basic Login]
-----------[Task] [Login][Implement Forgot Password]
-----------[P1] [Registration][Fix UserId validations]
-----------[P2] [...][............................]
[You may think of other practices, like using standard user-defined prefixes in a bug's subject line, or the use of Keywords etc., for categorizing the work on a functionality or iteration basis. The idea is to find out what is already available and how you can make it fit your requirement.]

-- This would not take more than a few minutes, and the Scrum Master (or the Project Manager, whatever the title) also used to sit with the engineer creating the tasks to guide them better; it also worked as an "undocumented" mentoring process.

-- We used to assign a default priority of P5 to all the Bugs (including Tasks) and assign P1 to the Tasks/Bugs for the current iteration. The P2 priority was used for those Tasks/Bugs that were planned for the current iteration but could be pushed to the next one should time not permit. This was how priority was handled within iterations.

-- This kept everyone in sync with the task management. There was also minimal duplication of work - no using one tool for Project Management and another for Bug Management and manually transferring information from one to the other.

-- Project stakeholders at different sites could simply run the "Saved Queries" to get the information they needed. Remember, you need different "Saved Queries" to serve the purposes of different stakeholders, and setting them up is a one-time job. Some of the "Saved Queries" that I can list off the top of my head:
--- Tasks/Bugs for the Current Iteration
--- Tasks/Bugs for the Previous Iterations (Copy & Paste job with the Iteration name being the variable)
--- PENDING: Backlog (Same as above but no Iteration tag/keyword assigned)
--- DONE: Overall Tasks/Bugs already closed
--- OPEN: Tasks/Bugs being worked on
--- <Any other queries that we used on a routine basis would go here>

-- The same queries were used even on the project Wiki pages as pointers.

-- It also helped with quick review meetings, at any time and from any place.

-- Beyond that, it worked as a good engineering practice to integrate the SCM tool as well (for bug reviews, tracking regressions etc.), because every change ties up to a Bug Number. (We integrated Bugzilla with our SCM.)


This was a "good enough" and "a derived" approach for us with the Questions we needed to answer in the project management. Again, this was the solution that worked for the requirements we had and mention of Bugzilla here is just to state an example. What we need to take into account is - thinking beyond the bookish approach and having a simple process in place to answer the basic Task Management requirements for "serving the individual stakeholder requirements" and at the same time "ensuring the sustained development".

"No Process" and "Over Process" both are equally dangerous. Improvising your processes does not mean doing everything new and deploying new tools.




Have you charted out your project tracking requirements (what exactly you want) before zeroing in on any tool?

Has your team been involved in evaluating the project tracking requirements? Are you ensuring that the 'team collaboration' principle is used in developing the project tracking process, OR in reviewing the existing process, whatever the case may be?

Have you evaluated whether the existing in-house tools can be tailored to meet your requirements before investing your time and money in a whole new tool?

Is your project tracking tool or methodology adding to the "Simplicity" principle of Agile rather than adding more "Complexity"?

Overall, are you inviting the team's feedback to ensure this process also evolves (iteration by iteration) as you progress? (Just like we talk BIG about evolutionary requirements and evolutionary design.)



"Simplicity of Agile should not be lost in the complexity of Tools"


(Attribution: Images on this post are downloaded from FreeDigitalPhotos.Net)

Monday 16 July 2012

TDD is a Control System

Apart from being a "Test First, Test Early" approach, Test Driven Development can be seen as a very basic Control System. So it is not just about the "when" part (developing the testcases before writing the code) but also about the "what" and "how" parts of it.

- What is the input you are considering for developing the testcases? Is it just the feature under development or is there something else?
- How are you implementing the iterative model for progressive development towards Quality, by constantly improving the Quality of your testcases across the project cycle?

What is it about?

Let us say you are developing the testcases for your current iteration (or what you may formally call a sprint in Agile terminology). Then, how about these feedback inputs going into your TDD model?

- First Level feedback from the internal testing done by the Team
- Customer Feedback for the Current Sprint
- Customer Feedback from the previous Sprint
- Customer Feedback from the Future Sprint


The first two items need no explanation; if I wrote something about them here, it would just be to fill up space on this post. The last two items deserve a mention, as they are sometimes missed in project execution.


How about a small example here?

Let us say you delivered the Login feature to the customer in your previous iteration, and the customer reported a couple of errors around the error handling mechanism in the Login module. Okay, no big deal; you fixed them even quicker than the customer reported them. :) Cool.

But then the customer kept reporting similar issues with the other features as well. Well, this is definitely not good news for either the customer or you, right?


- Customer Feedback from the previous Sprint

--- How is the customer trying to use the application?
--- What is he expecting from the application - functional, non-functional, error handling, usability etc.?
--- What lessons do you need to carry into future iterations that could impact the design, test coverage etc.?

We used to call this "Bug Analysis" or "Generic Analysis". It doesn't matter what name you give it, or whether you do it in the sprint review meeting or in some other informal meeting. What matters is ensuring that the "generic lessons are communicated to the team". In the example above, if the error handling issue is communicated only to the respective developer, and another developer makes the same mistake in another piece of code, the purpose of TDD (and of Iterative and Incremental development) falls short.


- Customer Feedback from the Future Sprint

Assume, in the example above, that you had delivered a Registration feature before the Login feature, but the error handling issue was reported on the Login module, which you delivered just now. The issue is valid for the Registration module as well; it is just that the customer has not caught it there. Does that mean you don't need to fix it in the Registration module because it was not reported there? Obviously not.

This is what I mean by "Feedback from the Future Cycle": it fixes errors before they are reported. 'Bug is not reported' doesn't mean 'bug is not present'. Just like us, customers also tend to report different types of errors at different times during the project cycle. So "Don't change the working code" cannot be applied literally here.
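
One way to turn such feedback into something executable: encode the lesson from the Login bug report as a single testcase and run it against every module it applies to, Registration included. The sketch below is hypothetical (assuming JUnit 4; the validators are stand-ins, not real project code).

-----------------------------------------------------------------------------------------
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class EmptyUserIdFeedbackTest {

    interface InputForm {
        String submit(String userId); // returns a user-readable message
    }

    // Stand-ins for the real Login and Registration modules. The customer
    // reported the empty-User-ID bug only on Login, but the lesson applies
    // to Registration too.
    private final InputForm login = userId ->
            userId.isEmpty() ? "Please enter your User ID" : "OK";
    private final InputForm registration = userId ->
            userId.isEmpty() ? "Please enter your User ID" : "OK";

    @Test
    public void emptyUserIdGetsAReadableMessageInEveryModule() {
        List<InputForm> forms = Arrays.asList(login, registration);
        for (InputForm form : forms) {
            assertEquals("Please enter your User ID", form.submit(""));
        }
    }
}
-----------------------------------------------------------------------------------------

If the Registration stand-in ever regresses, a test like this reports it before the customer does.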



Are you ensuring that the team participates actively in the generic feedback analysis and shares that knowledge, rather than treating it as a mere 'Bug Fix and Verification' cycle?

How useful was the feedback on the current iteration for the Quality of the iteration and for the overall Quality of the project itself?

How much better is your TDD coverage in this iteration than in the last one (going forward)?

Are you ensuring that a quick revisit of the previous iterations is done based on the feedback from the current iteration (going backward)?


"An effective analysis of a bug reported can help you fix multiple bugs in your software before they are reported"

(Attribution: Images downloaded from FreeDigitalPhotos.Net)

Tuesday 10 July 2012

Business principles are not tightly coupled to any industry





As you have probably started thinking, the poster is not from any IT office. It is a poster I found in a restaurant I recently visited. Well, I'm not going to write anything here explaining those principles. You are your own boss, and you are the best person to explain those principles to yourself on the grounds of "What makes sense to you" and "What adds value to you and your team" at work.



"Values of an organization should reflect strongly in its processes"


Understanding Test Driven Development

One of my friends once told me that they were implementing Agile practices and Test Driven Development in their organization. When asked “What is Test Driven Development?”, he answered, “the testcases are developed first and the software is written to the testcases”. He also explained that it helps catch errors sooner rather than later. This is true. But beyond that DNA of Test Driven Development, he couldn’t explain the other benefits you can reap from TDD if you implement it the right way.

 
Well, let’s not do something just for the sake of it.


It is perfectly true that the TDD model helps you catch errors sooner rather than later. But more than that, TDD brings a “microscopic” approach to the project implementation through Incremental and Iterative cycles.



First, what are testcases all about?
  • Test Cases are a direct means of communication among the developers, users and other stakeholders: a way to understand the system correctly and comprehensively and arrive at a common, formal contract.
  • They are the language in which the stakeholders talk to one another at the lower, implementation level.
  • Test Cases are nothing but “executable specifications” of the system that you are developing. A good set of testcases is nothing but “working code in plain English”. Did we get it right before jumping on to TDD?

Theoretically, there are different models practised for implementing TDD.
-----------------------------------------------------------------------------------------
  • In a pure and religious Agile model, the developer owns the responsibility for testcase development and execution, along with developing the code to those testcases.
  • In other models, there is still a dedicated Testing team, and the Testing team develops the testcases in parallel with the code construction.
-----------------------------------------------------------------------------------------

Irrespective of the model your team follows, you need to ensure you are developing a comprehensive set of testcases (Test for Good & Test for Bad; Functional & Non-Functional) before jump-starting the coding of your iteration or sprint or feature - whatever the TDD in the picture is for. Otherwise, TDD adds no additional value to your project.
 
To illustrate with an example from my experience, we used to implement TDD as follows:
 -----------------------------------------------------------------------------------------
  • Depending on the team composition, team skills or the interest of the individuals (which you need to respect), the team members used to play varying roles.
    • Development-Only
    • Testing-Only
    • Both Development and Testing
  • There were no hard and fast rules on the above team composition; the team would decide it through "collaborative planning". Yes, as a side note, "team collaboration" should be applied in the planning phases as well.
  • Developers used to wear those Testing Hats in the development of the critical pieces of the software. This could be from business requirements point of view or technical implementation point of view.
  • The Test Engineer or the TDD-Developer would come up with his first set of testcases and call a "review" meeting with the other team members.
  • And believe me - these review meetings were where a lot of brainstorming used to happen; people got into interesting discussions, raised a lot of questions and got answers. This also proved to be a very effective informal platform for Knowledge Transition.
  • Interestingly, the "take-away" items from these TDD meetings used to be:
    • Expected Results answered (DONE)
    • Expected Results unanswered (TODO item for the Developer, TODO for the Test Engineer, or a QUESTION to the customer)
    • New Testcases developed
    • New Testcases to be developed
    • Overall, the "executable specifications" coming into shape on a "collaborative platform".
    • Everyone understanding the system a little more and a little better.
    • ...
  • The developer would now start with the code, possibly with TODO comments for the questions that would need to be answered shortly (see the sketch right after this block).
-----------------------------------------------------------------------------------------
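
To picture that last step, here is a minimal sketch, assuming a hypothetical discount-calculation feature; the TODO comments carry the exact open questions from the review meeting.

-----------------------------------------------------------------------------------------
import java.math.BigDecimal;

// Hypothetical: the developer starts coding right after the testcase review,
// with the meeting's open questions pinned to the exact spots they affect.
public class DiscountCalculator {

    public BigDecimal discountFor(BigDecimal orderTotal, boolean premiumMember) {
        // Reviewed and agreed: premium members get a flat 10%.
        if (premiumMember) {
            return orderTotal.multiply(new BigDecimal("0.10"));
        }
        // TODO (QUESTION to customer): is there a minimum order value
        // below which no discount applies at all?
        // TODO (Test Engineer): add the boundary testcases once answered.
        return BigDecimal.ZERO;
    }
}
-----------------------------------------------------------------------------------------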

Refer to my article "TDD is a Control System" to understand a few more practical facts of paying continuous attention to improve the Quality of TDD beyond an iteration cycle.



If you are a Project Manager:
  • What are you doing to ensure your team is practising the right TDD principles rather than a mere "Test First, Test Early" approach?
  • How are you ensuring collaboration among individuals so that the "Wisdom of the Group" is used for building up the "Executable Specifications", or formally, the "Testcases"?
If you are a Developer or Test Engineer:
  • Did you ensure you got the testcases "correct and complete" before constructing the code to that list? Remember, TDD is all about minimizing rework and refactoring.
  • Are you ensuring you are communicating to the Test Engineer beyond formal meetings?
  • Did you get those unanswered questions into your TODO list, and into the TODO comments in your code, before they end up as a BUG reported by someone else at a later stage?
  
"Right Practice is what you need to ensure Quality, name of the Process makes no difference" 


(Attribution: Images on this post are downloaded from FreeDigitalPhotos.Net)

Speaking the Customer Language

Well, before you misinterpret the subject line, it is not about whether to speak English or Hindi. :) It is about the format of our communication in implementing the user requirements.

Let me try to illustrate this with an example from my own experience.

A few years back, I was part of a project in the Banking domain. The solution we were implementing dealt with different departments of a major bank (like the Call Center, Core Banking and Credit Card departments). I don’t know how many levels of communication and information transition happened between the customer providing his requirements and those requirements ending up as Java code, but at the level of module leaders and individual developers, we used to communicate in a language like this:

-- There are n screens in the front-end module representing different types of end-user requests.

-- The X screen on the Call Center module will have so-and-so fields; some particular fields are numeric whereas some are alphanumeric etc. (There were some fields which we didn’t even know what they stood for.)

-- Upon hitting the Submit button, the back-end will translate the data into a pre-defined XML format and send it to the Core Banking module or the Credit Card module. Some of the responses to these requests are synchronous and some are asynchronous. (We didn’t know why most of these responses had to be synchronous or asynchronous.)

-- …

Well, our team really worked hard and implemented the solution ‘to the requirements that we understood’. The design was good ‘to the requirements that we were given’. The testing was carried out ‘to the technical facts interpreted’ and we evaluated the solution against the ‘technical language’ that we had been speaking thus far.
Then came the System Integration Testing (SIT), when we went to the customer site to integrate our solution with the customer’s existing IT infrastructure. Surprisingly, it was not just integration issues that we dealt with there; a major portion of the issues were pure Business Functionality issues that we had not paid attention to understanding. We didn’t understand them right, and we didn’t speak them right when we implemented our solution.

At the customer site, we worked alongside a couple of IT engineers and Business Analysts. The business analysts cared the least about our Java or XML, or how many modules our solution was made up of. They were worried about the provision to have the different business requests processed in the system (for them, each input form on the front-end was a business case and a real-world scenario), and, when the processing was not successful, they had to be informed through the system with proper user-readable messages/alerts. Business SLAs, if any, were to be considered too. These are just a few examples to highlight what we had missed in our implementation.
Seriously, that SIT phase was when we worked closely with the customer and understood most of the requirements – the real business requirements this time around. We also understood what our Java Exceptions meant to their business and what to do with the different Exceptions, and we started speaking the customer’s language. We had to rewrite a good part of the business functionality, but at the end of the day, as an individual, I learnt the hard lesson forever: “speak the customer’s language” before you speak the “technical language”. Had we done that from the beginning, we would have saved a lot of time altogether.
Remember that every customer would love to hear more of their business terminology than your technical terminology. For example, ‘the Call Center department will send an x request to the Core Banking department’ would sound better to them than ‘the Call Center module will send an x message to the Core Banking module’.
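
As a small illustration of what “speaking the customer’s language” can look like even inside the code, here is a hedged sketch; the class, messages and department names are hypothetical examples, not the actual project’s code.

-----------------------------------------------------------------------------------------
import java.net.SocketTimeoutException;

// Hypothetical sketch: translate a technical failure into the message a
// business user at the Call Center actually needs to see.
public class RequestSubmitter {

    public String submit(String requestId) {
        try {
            sendToCoreBanking(requestId);
            return "Request " + requestId + " sent to the Core Banking department.";
        } catch (SocketTimeoutException e) {
            // Technical truth: a socket timed out.
            // Business truth the user needs:
            return "The Core Banking department did not respond in time for request "
                    + requestId + ". Please retry, or contact the branch help desk.";
        }
    }

    // Stand-in for the real back-end call that marshals the XML and sends it.
    private void sendToCoreBanking(String requestId) throws SocketTimeoutException {
        throw new SocketTimeoutException("simulated timeout");
    }

    public static void main(String[] args) {
        System.out.println(new RequestSubmitter().submit("CC-1042"));
    }
}
-----------------------------------------------------------------------------------------

The try/catch structure is ordinary Java; only the wording of what comes out of it changes.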



-------------------------------------------------------------------------------------------------------------------------------------
Has your team understood the business problem well before jump-starting the design and code? If you don’t know the customer’s business already, work with the customer to understand it. Openness is very important here.
Are you asking the customer the right questions to understand his business? This also helps the customer measure what your team knows and doesn’t know, arrange for training material if needed, or take other appropriate steps to educate your team further.
Is your team discussing the business case scenarios in the design or review meetings OR are they always discussing the technical problems?
Are your incremental deliveries being evaluated by the right team from the customer side?
Does your team know who the end users of the solution will be, and is it adding the "customer perspective" in the construction phase?
-------------------------------------------------------------------------------------------------------------------------------------

"Software contruction isn't just about solving a set of technical problems but it is about solving business problems through technology"

(Attribution: Images on this post are downloaded from FreeDigitalPhotos.Net)