Both Agile and DevOps have brought about major improvements in the efficiency of software development. But how are teams supposed to get to this improved level of efficiency? Steve Naidamast looks at metrics and the problems of estimates in an agile environment.
Despite the gains in efficiency in both areas, there is still little discussion of how one gets to efficient application development in the first place. In fact, very few documents have been written on how one estimates and schedules new and ongoing development efforts within the criteria that the Agile paradigm promotes; though there may be actual techniques involved, little is portrayed about them. In light of this, however, articles are beginning to appear that detail the ongoing issues with even this very popular process paradigm.
One such article, which takes a high-level look at these issues, can be found in the online edition of Forbes. The article’s author states right up front that Agile is continuing to have issues beyond the basic task and small-project levels:
“Nevertheless, the problems and limitations of Scrum and the broader Agile movement have proven surprisingly persistent, in spite of the throngs of very smart people who have worked in various capacities with Agile over the years. The numerous complaints about Agile include its lack of focus on software architecture, its emphasis on one-off software projects as opposed to building reusable code, and the reinforcement of the notion that the software development team is a self-contained group, as opposed to participants in a broader collaborative effort.”

Interestingly enough, the article suggests that not enough is being done to correct these issues, which could otherwise boost the Agile paradigm to the next level, interpreted as a mixture of the Agile and DevOps paradigms. However, if one analyzes this prospect, building a new paradigm on top of one that is beginning to show significant limitations may do more actual harm than good.
One of these limitations is that the Agile paradigm has made a determined, and unfortunately successful, attempt at removing or limiting the reflective aspects of software development. One such aspect is that of project estimates and subsequent scheduling.
Agile and the problem with estimates
Due to the large number of organizations that have refused to understand and implement quality project estimate analysis that actually follows engineering paradigms, companies have consistently floundered in their attempts to implement software projects on time and within budget that satisfy user expectations. If you remove or ignore such a vital aspect of software engineering, you cannot possibly label what you are doing software engineering, just as Yahoo has recently done by eliminating another such vital aspect of the engineering process, that of Quality Control.

Though the results claimed in that organization from moving such a function into the development arena are significant, there are two issues here which have not been explained in this recent announcement (see here). One, there is no mention of the types of projects that Yahoo is concentrating on, which would make such statements credible or questionable. Two, there is no evidence that by making such a move developers can exceed the maximum percentage of defects they are able to unearth themselves, which research has consistently approximated at 60%.
Microsoft has followed a similar path, and reports are now surfacing that describe serious failures in its own product development efforts, which are yielding poorly tested deliverables.
There is a reason why software engineering principles have always maintained a compartmentalized but consistent approach to software project development; it is the way a creative process can be moved into the engineering realm.
Returning to the focus of this paper, it is a fallacy to believe that an endeavor such as project estimation analysis cannot be done accurately. In fact, this fallacy was disproven years ago. However, many IT organizations, whether they have invested in Agile or not, still operate on the assumption that project estimates can be gauged simply by reviewing prior projects, or pulled out of a hat like a magic trick. And though this may be successful with small tasks such as maintenance, or even with small projects, no such results can be used to accurately gauge how long a complex endeavor may take.
First and foremost, organizations practically across the board have failed to realize that an initial project estimate is just that: an estimate. It is a guess, in some cases a best guess, but still a guess nonetheless. Many IT organizations nevertheless use such “guesses” as actual target development times, begin implementing their projects on that basis, and then find that the project goes over budget, has too many defects, implements specifications that were erroneous, misunderstood, or poorly transposed to the development staff, or some combination of the three, any and all of which will push a project into the software engineering definition of a failed project. Surprisingly, some authors writing about “continuous integration” accept that defects will enter the production implementation, forgetting that every defect that does so is very costly to correct at that point, no matter how many times you re-implement code containing corrections under such a paradigm.
Despite the constant number of project failures in the United States (which are generally reported on a yearly statistical basis), Information Technology as a profession has yet to understand that software development cannot be commoditized, as it has been through the use of the Agile process paradigm, despite the huge efforts at doing so. In many respects such commoditization has had the additional effect of adding to ongoing project failure, as professionals still attempt to reduce vital functions of the development process to sound-bites.
Much of the problem with all this comes down to the question of exactly how fast human beings, even young and enthusiastic ones, can produce well-crafted code with near-zero defects while also having to keep up constantly with a changing landscape that threatens to make such personnel obsolete just as quickly. Such stresses, combined with the breakdown of compartmentalized development processes, make such a pace of development unsustainable over the long term, and recent reporting on developer burnout appears to corroborate this. Proper project estimation has been one of the first processes to suffer in the event.
In true software engineering, as well as in the reality of software development, the axiom that “we slow things down to speed them up”, as paradoxical as it sounds, has been proven over time to be the only credible technique not only for developing quality software on time and within budget, but also for determining accurate schedules for such endeavors. Instead of taking initial estimates and then refining them over time into substantiated, actual target dates, such initial estimates are too often simply used as actuals, a practice no true engineer could subscribe to.
It is this concept of an evolutionary estimate process that makes practical software engineering appear so counter-intuitive to the hyped-up landscape of the software development profession. Taking a reflective look at how a project is to be successfully completed appears, to that landscape, to be simply a waste of time that could be put to better use getting something else done.
No one would want an aircraft built using the Agile paradigm, given its limitations. No matter; many developers in the business application development part of our profession would loudly contend that none of us are building aircraft. Yet the number of failed weapon systems that military contractors have produced for the US Armed Forces in recent years shows consistently poor engineering standards even where such systems are being built. It is all part of the same milieu.
Slowing things down to speed them up actually means taking the time necessary to plan and understand the task at hand so that the shortest amount of time possible is allotted to the completion of the endeavor. This doesn’t appear to make much sense to many developers, and especially not to technical managers. In fact, however, this axiom has been shown to be categorically correct. The “shortest amount of time” for an endeavor means that, given all the factors for a project (i.e. staffing, tools, modules to be developed, risk), a correctly analyzed estimate will yield the shortest amount of time in which it is “physically” possible for a given team to complete a project that meets all of the software engineering standards for production work.
It starts with the premise that every project is unique, and subsequently that each project has a specific, minimal period of time in which it can produce high-quality results. Go below this minimum and the project will be rushed, with added stresses on the development staff that will yield defects. Go over this time and, more often than not, project budgets will increase as a result of poor planning and, most likely, the additional bane of feature creep, which will increase the level of defects since there will be more to develop than originally intended.
The idea behind the “evolutionary estimate” is that at the beginning of each project no one really knows how long the project will actually take to produce quality results. However, good technical managers will most often have a general idea based on their experience with their team and the projects that have been successfully accomplished in the past. Thus, providing a very general estimate, most often padded with a calculated buffer of time, is a good practice to begin with, as long as the people involved understand the capabilities of the team and its past performance.
However, in many instances this practice quickly takes such estimates and promotes them into actual target dates as a result of business pressures. And when this happens, the project most often begins its descent into the “failed” category.
What estimates actually look like
When providing an initial estimate, no matter how good one may be at doing it, it will always be off by a large factor at the initiation of any project. This is simply because an accurate estimate requires information, something that is always lacking at the start of any new project. You may have some basic information, and a basic understanding of the general performance of the team doing the project, but project-specific information is usually not available at this point.

Specific project information is the data that details how fast accurate, low-defect results can be achieved given the project requirements. You can begin a project with some basic assumptions as to the time required for certain tasks, such as data-access development based on existing software that may be reused, and such information can be used to begin an analysis of the general period of time required for project completion, but it is still fairly generalized data. As each project is unique unto itself, project-specific information can only be obtained once the actual coding effort has begun and as it proceeds, even if you have on hand prior metric data for similar, previous projects. One still has to follow a standard “evolutionary estimation” process for each project in order to refine existing metric data or begin recording new data for your organization. Standardizing such a process also encourages business partners to view your work from a professional standpoint and, when it is successful, they will be more willing to accept such a process for all development efforts.
Therefore, one has to view an “evolutionary estimate” as a process as depicted in the graph below…
This graph demonstrates how difficult initial estimates for a project can be, no matter how qualified the person making them. What it shows is that initial estimates usually vary (either high or low) across a range whose extremes differ by a factor of 16. Thus, even when all of the requirements analysis has been completed, any such estimate can be at most 50% accurate.
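To make the idea concrete, here is a minimal sketch of how such an estimate range narrows over time. The phase names and multipliers are illustrative assumptions: only the factor-of-16 spread of the initial range and the rough 50% accuracy after requirements analysis come from the discussion above.

```python
# A sketch of the evolutionary-estimate range, under assumed multipliers.
# Only the factor-of-16 initial spread and the rough 50% accuracy after
# requirements come from the discussion above; the rest are placeholders.

PHASE_MULTIPLIERS = {
    "initial concept":       (0.25, 4.00),  # extremes differ by a factor of 16
    "requirements complete": (0.50, 1.50),  # roughly 50% accurate
    "mid-project":           (0.80, 1.25),  # assumed interim refinement
    "final quarter":         (0.95, 1.05),  # refined enough to commit a date
}

def estimate_range(nominal_days: float, phase: str) -> tuple[float, float]:
    """Return the (best-case, worst-case) range around a nominal estimate."""
    low, high = PHASE_MULTIPLIERS[phase]
    return nominal_days * low, nominal_days * high

for phase in PHASE_MULTIPLIERS:
    lo, hi = estimate_range(120, phase)
    print(f"{phase:23} {lo:5.0f} .. {hi:5.0f} days")
```

For a nominal 120-day estimate, the printed range narrows from 30–480 days at inception to a window tight enough to commit an actual completion date.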
When management pushes for schedule commitments and estimates that cannot be based on anything realistic, it is up to the project leader to parry such efforts by offering wide margins of latitude with best- and worst-case scenarios. Even then it can be difficult for a good technical manager to convince his or her business users to refrain from using such initial estimates as expected completion times. And there is a very specific reason for this ongoing dilemma in business.
Despite all the talk and hype about business organizations wanting high-quality work from their technical teams and other corporate areas, it is nothing more than lip-service. The majority of businesses, though they talk a good game, really only want “mediocrity” that will at least satisfy their immediate needs. This has been sociologically demonstrated by analysts of US business over time. Even the newer startups incur the same cultural impediments, since the people funding the new company often want to see results quickly, not understanding that the idea of “quickly” is relative to quality software engineering practices.
Thus, good technical managers have to learn effective negotiation skills that allow them to promote quality software engineering practices to management while at the same time convincing them that such practices will yield the speedy results they clamor for. This too is often beyond the scope, or even the abilities, of many technical managers, leaving the software development profession with few really good technical managers who have good business partners.
So the idea that the new process development paradigms promote, that we can all work together, is more or less a fantasy, as business culture has never changed; in the United States it is getting progressively worse, as anyone who keeps up with the continuing studies of US business institutions can attest.
Nonetheless, it is at this point that a majority of software development managers often begin to self-inflict destructive wounds on their own projects. It is at this point that such managers believe it is the “better part of valor” to commit to what their own supervisors are expecting, even when it is most often simply not “physically” possible. If a manager somehow makes an unrealistic deadline, he or she is lauded for the attempt. But if they fail, as they often do, their position in the company could be placed in jeopardy. It is a no-win proposition, so why attempt it and ignore credible planning processes?
As should also be noted from the graph above, assuming that proper project data is being correlated as the project proceeds, in every quarterly phase of the project the original estimate can be refined, until the last quarter, where the project should have enough data to develop an actual completion date. Anything prior to this period is just an increasingly accurate estimate. Both the estimates and the actual should fall into the range of time initially expected for completion, and most often it will be the early part of that range. For example, if a project is expected to begin in January and the initial estimate predicts a completion period of sometime the following summer, do not be surprised if, following the “evolutionary estimate” model, the project actually concludes successfully sometime in June.
This process of estimation has been proven categorically over many years of research into software development efforts. Whatever the size of the project, proper monitoring of project data as it becomes available is crucial to developing increasingly accurate estimates and then, finally, an actual target date of completion. The only way to do this is to track the developing metrics of a project.
What are metrics?
One of the primary characteristics of Agile is that it promotes the idea that if we all just work well together, while granulating the required tasks into small chunks of work, productivity will increase while production implementations will be made more rapidly. At least that is a conclusion one could come to from the many popularized articles on the subject. However, the issue comes down to how well any particular team is doing against a project’s expectations. And this cannot be measured without some form of statistical base. The inclusion of metrics is the way that this is accomplished.

Metrics are then tracked not only for the team but for each team member; the latter to eventually ensure that each individual has the right conditions to produce their best possible output. Thus, if an individual is given a task that is determined to take 3 days but ends up producing an inordinate number of defects in the end result within that time, he or she may be attempting to rush their development without the appropriate unit-testing. Given 5 days, the same person produces an end result with nearly zero defects; the team now has a metric for that individual for a specific set of task criteria, and knows that for such a task that particular individual will require approximately 5 days to complete it with practically no defects. Now apply the same tracking standard for those specific criteria to all team members, and the team can generate an average length of time required for any one team member to complete such a task.
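As a minimal sketch of what such tracking might look like in practice, the snippet below records hypothetical (person, task criteria, duration, defect count) completions, discards rushed results that exceeded a quality threshold, and derives the per-person and team averages described above. The names, task type, threshold, and figures are all invented for illustration.

```python
from collections import defaultdict
from statistics import mean

MAX_OK_DEFECTS = 1  # the "practically no defects" threshold is an assumption

# (team member, task criteria, days taken, defects in the end result)
completions = [
    ("alice", "data-access task", 3, 7),  # rushed: excluded from the metric
    ("alice", "data-access task", 5, 0),  # her sustainable pace
    ("bob",   "data-access task", 4, 1),
    ("carol", "data-access task", 6, 0),
]

def days_per_person(records, criteria):
    """Average days per person for one task type, low-defect results only."""
    by_person = defaultdict(list)
    for person, task, days, defects in records:
        if task == criteria and defects <= MAX_OK_DEFECTS:
            by_person[person].append(days)
    return {person: mean(days) for person, days in by_person.items()}

per_person = days_per_person(completions, "data-access task")
print(per_person)                                  # {'alice': 5, 'bob': 4, 'carol': 6}
print("team average:", mean(per_person.values()))  # 5 days for this task type
```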
Such a process, though somewhat generalized in the example provided, is how any IT team can track its projects and generate statistical information that can then be used to refine original estimates into more accurate outcomes. The additional benefit is that the project manager will have ongoing data with which to refine the original estimate: if quality results are taking longer to produce, then he or she will extend the estimate; if they are taking less time, the estimate can be contracted. However, such refinement must also include a clear understanding of the data across a variety of tasks with the different criteria that primarily define the nature of the project. As this data accumulates over time, the estimate can be refined more accurately.
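One hedged way to picture that extend-or-contract rule is to scale the remaining estimate by the ratio of the observed pace for quality results to the pace originally planned. The figures below are hypothetical.

```python
# A sketch of the refinement rule described above: scale the remaining
# estimate by the ratio of observed to planned pace for quality results.
# The task type and all figures are invented for illustration.

def refine_estimate(remaining_days: float,
                    planned_days_per_task: float,
                    observed_days_per_task: float) -> float:
    """Extend the estimate when quality work is slower than planned,
    contract it when it is faster."""
    return remaining_days * (observed_days_per_task / planned_days_per_task)

# Tasks of this type were planned at 3 days each but take 5 days for
# low-defect results, so 60 remaining days becomes 100.
print(refine_estimate(60, planned_days_per_task=3, observed_days_per_task=5))
```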
In the example above, it was demonstrated that the 5 days the specific individual took to complete a task of specific criteria with a practically zero defect rate was in fact the fastest that particular individual could work and still yield high-quality output. This should in no way be taken as criticism of that individual, since it is what it is. Different people work at different speeds to produce quality results.
The idea is not to attempt to get everyone to work their fastest towards some perceived maximum level of speed, but instead to understand how fast a team can function, given certain criteria, while producing high-quality results. The speed indicator is a by-product of this understanding, not the goal, as some teams will be faster and some slower. The metrics of such tracked data are what provide the indicators of the shortest period of time a particular team will take to complete a specific project with a given set of criteria. Forcing a team to work faster than the data allows will only cause the project’s budget to increase, as more time will be required to correct defects that become increasingly costly the later in the project they are found.
Before setting out to track such data, another factor has to be considered: “risk”. The example above describes a fairly clean situation, which is not realistic. All projects, and inherently the individual tasks that make up a project, operate under factors of “risk”. “Risk”, as was discussed in a previous article in this series, is any factor that could impede the quality completion of a task and subsequently of the overall project. For example, if the individual in the scenario above is known to be a fast and efficient worker, it can be asked why he or she required two extra days to complete a task that the project manager felt could have been done in 3. Was the person not feeling up to par? Was there an emotional issue involved, such as a serious family concern? Did the person have other assigned tasks at the same time and, if so, how much time was devoted to them? These types of questions are all important to the measurement of such metrics, since in the future the project manager could more accurately gauge the completion time for this individual under such circumstances, and expect a quicker turn-around when they do not exist.
Nonetheless, “risk” is often calculated prior to the initialization of a project. For example, let us assume that a primary member of the project has a family member who is seriously ill and may have to go into the hospital, forcing that team member to take some personal time. If that person indicates that such a situation is likely, the project manager can assign an estimate of risk to the project at a particular time of, say, 90%. This means that the manager now has two potential scenarios for his project estimations: one where his team member is out, thus slowing the project and increasing the overall estimate, and a second where the team member is not forced to take time off and the estimate can remain as originally envisioned, with a full staffing complement.
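A minimal sketch of those two scenarios follows, with a probability-weighted figure added for planning purposes. The 90% probability comes from the example above; the 90-day baseline and the 15-day impact are invented.

```python
# Two-scenario risk adjustment as described above. The probability comes
# from the example; the baseline and impact figures are assumptions.

def risk_adjusted_estimate(baseline_days: float,
                           impact_days: float,
                           probability: float) -> dict:
    """Return both scenarios plus a probability-weighted expected duration."""
    with_risk = baseline_days + impact_days
    expected = probability * with_risk + (1 - probability) * baseline_days
    return {"full staffing": baseline_days,
            "member out": with_risk,
            "probability-weighted": expected}

print(risk_adjusted_estimate(baseline_days=90, impact_days=15, probability=0.90))
# {'full staffing': 90, 'member out': 105, 'probability-weighted': 103.5}
```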
The complexity of such a tracking effort is exactly what makes it counter-intuitive to most technical managers. However, it is in fact a vital basis for software engineering, and it is why, when such processes are used, the term “engineering” can be applied.
If we were to look at some of the top articles posted on metrics for Agile environments, given the search terms below, what we would find are discussions that describe metrics tracking only after the fact of task/project completion, without much detail on the substance of actual metric development.
- how do agile teams measure success
- agile metrics
Another article suggests that for future projected “sprints”, if a particular team member is assigned to the team less than 35% of the time, their output is not considered vital enough to be counted and they are treated as overhead to the team. In fact, the author’s statement on this matter makes no sense from an engineering standpoint…
“My response is that if a person works day in and day out to deliver the sprint objective (working software) then he/she counts, regardless of his/her title or role. If they are less than 35% assigned to the team, then I don’t count them at all. It will be more trouble to find them and ask them for information than their contributions yield – treat them as overhead. Oh, and you have an impediment.”

From an Agile standpoint this may make perfect sense. From an engineering standpoint, however, all it does is quickly dismiss someone who may in fact be a vital component of the project. For example, what if this part-time developer is working on a critical algorithm that only he or she has the experience to develop? You do not then have an impediment, but a team member who may only be able to provide 30% of their time to the project while producing a critical part of it. That person must then be included in the project metrics just as much as the one who is working on the project full-time. If not, how would a project manager understand the ramifications for a similar project in the future?
Here is another example, this time on defect tracking, from another of the articles…
“Defects: Defects are a part of any project, but agile approaches help development teams proactively minimize defects. Tracking defect metrics can let the development team know how well it’s preventing issues and when to refine its processes. The number of defects and whether defects are increasing, decreasing, or staying the same are good metrics to spark discussions on project processes and development techniques at sprint retrospectives.”

This article is quite correct to note that defect tracking is an important metric. However, when discussing project schedules, there is no way to know in advance what defects may or may not be created during the endeavor. In addition, defects are often caused by specific circumstances within the project: poor requirements analysis, rushed planning, inaccuracies in technical specifications, and exhaustion, just to name a few.
One of the ideas behind proper project metrics is to eliminate as many potential defects as possible prior to the release of an implementation into production, since from a software engineering standpoint, deliverables that enter production with defects can be counted among the statistics of “failed” projects, implying that the project was not run properly.
Agile is often stated as providing the business with “value”. In fact, to be specific, one of the articles reviewed states, “One of the cornerstones of Agile is to work on what’s most important to the business and deliver value as soon as possible.”
However, this is not software engineering! The idea of “value” to a business can be anything as long as they get it quickly.
Software engineering, on the contrary, defines such value as a deliverable that meets the project’s specifications, and with which the users are satisfied because the product not only performs as expected but also does so without defects.
Applying either Agile principles or software engineering standards to metrics, one could easily arrive at two different sets of results, given the foundations that the articles suggest.
In software engineering, the use of metrics is to provide information on what works and what doesn’t, based upon team members and teams and the factors that can impede their success. On an individual basis they are not used to denigrate anyone’s performance, but to understand under what conditions individuals will perform at their best. They are a basis for forecasting the success of any given project based on a wide array of criteria. They are accrued during and at the completion of the project to provide average weights for the future, which may or may not require adjustments in team structures based on the projects they take on.
With Agile, metrics appear to be mostly an afterthought; something to measure to see how fast a team can generate results that are acceptable to the business. And if the qualification of “value” is not important to most businesses, then Agile is fine as a process paradigm for mostly small endeavors and maintenance. Anything larger, and standard Agile techniques have already begun to show their limitations, as recent articles suggest.
Which metrics are needed for project success?
Software project metrics are measurements, formulas, or constructs that help form raw data into patterns that can be used to improve the project outcome or process, identify trends, and compare the project to other projects or to earlier versions of the same project. They are often used to report project information to management, as they often sum up areas of the project in a clear and easy-to-follow way.

Without them, it is often impossible to determine if a project is on track to meet important goals or deadlines. Furthermore, the effectiveness of improvements made to the process cannot be measured or compared.
The process of continuous improvement, where improvements made to each cycle of a software project make it better than the last, relies on accurate, clear, and impartial measurements.
Metrics fall into the following general categories:
Comparison metrics
Used to compare projects, or project elements, to previous versions of the project, or to other similar projects. The more similar these projects are to each other, the better and more accurate the comparison will be.
Tracking metrics
These are metrics that track the current progress of a project. For example, they might be graphs showing the amount of work completed or remaining, or tables showing which components are complete, being worked on, or haven’t been started. This kind of metric is generally more immediate than the other types, showing the state of the project in the present, the recent past, or the very near-term future.
Prediction metrics
These are metrics that show the expected state of the project, or project components at a future time. For example, they might be graphs showing the projected defect counts during an ongoing project, or customer defects over time for a project that has shipped. While prediction metrics are typically very “fuzzy” and subjective by nature, they are very useful to compare against the current tracking metrics to determine if things are going as expected and, if not, how far off they are.
Informational metrics
These are metrics that show information about an area of the project, without an expected result. They are primarily used to generate discussion. For example, a pie chart that shows different categories for the current defects on a project can be used as an informational metric. By itself, it doesn’t mean much, but it generates discussion about why there were more defects in one category or another. This kind of metric is distinguished from a tracking metric by the lack of an expected right or wrong result. Many times informational metrics can be combined with each other to become a higher-level tracking or prediction metric.
Process metrics
These metrics are used to show the conformance or deviation of a project to an established process or standards. For example, if the process for a project calls for all defects to be fixed and closed 30 days after they are opened, the process metric might be a graph or table that shows which defects were opened each month, when they were closed, and a color-code to indicate which ones fell outside the 30-day window (a minimal sketch of this appears after the list of categories). If the process metrics for a project consistently show non-conformance, this may point to a problem with the process itself rather than with the project it is measuring.
Most projects require metrics from all of these categories.
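As one concrete illustration, here is a hedged sketch of the 30-day defect-closure process metric described above. The defect IDs, dates, and evaluation date are invented.

```python
# A sketch of the 30-day defect-closure process metric described above:
# flag every defect whose open-to-close span exceeded the window.
# All defect IDs and dates are invented for illustration.

from datetime import date

WINDOW_DAYS = 30
TODAY = date(2016, 5, 15)  # hypothetical evaluation date

# (defect id, date opened, date closed or None if still open)
defects = [
    ("DEF-101", date(2016, 3, 1), date(2016, 3, 20)),  # closed within 30 days
    ("DEF-102", date(2016, 3, 5), date(2016, 5, 2)),   # closed late
    ("DEF-103", date(2016, 4, 1), None),               # still open
]

for defect_id, opened, closed in defects:
    age = ((closed or TODAY) - opened).days
    in_process = closed is not None and age <= WINDOW_DAYS
    print(f"{defect_id}: {age:3d} days  {'OK' if in_process else 'OUT OF PROCESS'}")
```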
For metrics to be useful, everyone involved with the data collection and metric generation must be using the same data, and getting the same results. For example, when counting defects across the various components in a project, it is important that everyone involved in the counting has the same understanding of what constitutes a defect as opposed to a design change. If they don’t, the numbers will be off, and the metric will show incorrect results.
The only way to ensure common understanding is to fully document the definitions and process for deriving the metrics, as well as full definitions for the data that feed into the metrics. It is important to document all of this in a common place and ensure that everyone involved reads and understands it before the data collection begins.
Do not spend too much time trying to get the terminology exactly right. It is more important to use the terms consistently than to find the right term.
Defining the metrics
To decide what metrics to use to measure a project, first decide what parts of the project are most important. Metrics that measure those aspects will also be very important. Start by breaking the project elements down into the following categories:

Things that must get done (key metrics)
These are the essential, core aspects of the project. If the product can’t ship without the construction of five features, then measuring the number of features is an important metric. It is important to distinguish the things that must get done or the project fails from the things that are desirable, but which will not cause outright failure. By measuring the key project elements first, attention is focused on the most important part. If this part fails, the rest won’t matter.
Things we want
It is human nature to produce and work toward more of anything that is measured in a visible manner. So, if we state that code features are important, we shouldn’t be surprised when we come back a month later and there are a couple of hundred more code features, built at the expense of other aspects of the project that weren’t measured. This is why it is necessary to identify those parts of the project that we really do want more of, and measure those. If we measure extraneous, unimportant things, we’ll end up with more unimportant tasks consuming valuable project time to meet the metric. This problem can be mitigated by using multiple metrics in different areas, so that no single metric becomes the driver for the project activities and goals.
Narrowing the metrics down
A large list of metrics will require too much time and effort to collect and measure. Furthermore, if some of the metrics are not particularly useful, they will interfere with the message provided by the ones that are key. Because of this, it is often necessary to distill the list down to only the most important ones. Start by keeping all of the key metrics, and narrow down the number of others. To help with this, use the following guidelines:
Data that is easy to measure
Data and metrics are not the same thing. Data is used by the metric to produce meaningful information. It is important to choose metrics that do not require data that is too difficult, time-consuming, or subjective to collect. Otherwise, too much project time will be used to create and update them. The exception to this is a metric that is only expected to be updated once, or once per cycle. For example, comparing the number of total defects found during a product development cycle with the number of defects found in the field after it ships may be difficult, but it is only done once. In this case, the usefulness of the metric may outweigh the time and effort required to collect the data for it.
Things we want, not things we don’t want
It is often easier to see the things that get in the way of a project than the things that move the project along. However, most people are happier and more productive working toward the things they want, rather than working *away* from the things they don’t want. Because of this, it is better to measure the positive, desired outcomes instead of the impediments. The exception to this is anything that would compromise the core elements of the project and cause it to completely fail.
Limit the number of informational metrics
Informational metrics are metrics that don’t have a desired outcome. Some of these are useful for generating discussion, but too many of these are distracting and waste time. Sometimes moving these kinds of metrics to the end of a project, after it ships, can make them more useful for planning, while less burdensome than they would be during the active production cycle.
Decide what form the metric should take when reported

Charts, tables, graphs, and summary text bullets are all possibilities for reporting methods. Making it look good is far less important than making it clear and easily understood. The 3D bi-modal bar chart may look impressive, but if your audience can’t understand what it means quickly and easily, the usefulness is lost.
Simple charts and tables tend to be easier to quickly comprehend than text.

Decide how often the metric should be updated and reported

The metric itself is the most important part, but if it is reported too frequently, or not frequently enough, the usefulness will be lost. Key metrics should be reported often, as a failure in one of these that is discovered toward the end of a project will be very expensive.
Some metrics should be measured weekly and reported monthly. Some should be measured more often, but reported only when they fall outside of a range. Metrics that are hard to determine, or which require hard-to-find data, should be reported less often.
The higher the level, the better the metric
The more data behind a metric, the more accurate it will be. This is because a single incorrect data point will have less influence if there are many other correct points to smooth out the result. For example, metrics at a component level, when rolled up to a project level, are often more accurate. Conversely, metrics that are taken down to the individual level are often very inaccurate and misleading. This is why metrics typically don’t work well for measuring things like individual performance, but do work to measure overall productivity rates for an entire project.
Use several metrics, not just one
Multiple metrics work better than just one for the same reason more data works better than less data. If one metric is reporting a skewed result, the other metrics will smooth out the result. Five metrics that show everything is going well, and one that shows it is not, is a cause to question the metric, not the project.
Use metrics to praise, not punish
When used positively, metrics can inspire and motivate. For example, giving out awards or thanks for driving down defect levels increases teamwork. Punishing the group that had the highest levels will cause people to work around the metric and incorrectly report data. In the end, all metrics rely on at least some data collection. If people are scared of the metrics, they will find a way to manipulate the data.
Improve the metrics
Metrics are often used to make small corrective improvements to a project or process and then measure the results to see if it worked. Over time, this can lead to significant improvements. These corrective improvements should also be made to the metrics themselves. As more data is collected and the actual results are compared against the metric results, it should be possible to improve the metrics. Metrics that prove to be less useful over time can be discarded and replaced.
The most important metric required for accurate project estimation
Metrics are an often misunderstood tool. Taking the time to carefully define appropriate metrics for your project or area ensures that the process of continuous project improvement is possible. When carefully defined and used, they can be effective, accurate motivators of behavior. Many metrics are determined at the completion of a phase or task. Yet how does one begin to use metric analysis to determine the estimated length of time that a project may take? This type of metric acts as a control with which to refine all other tracked metrics.

Thus, the most important metric to begin with is one that analyzes your expected project modules in order to determine the expected amount of time it would take to complete each one. One way this can be done is through mathematical analysis such as “Function Point Analysis”. Using such a tool you can determine a baseline for each component of the project and then apply “risk” factors as necessary to extend the individual times.
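As a hedged sketch of what such a baseline might look like, the snippet below counts a hypothetical module’s function-point elements using the standard IFPUG average-complexity weights; the hours-per-function-point rate and the risk multiplier are pure assumptions, to be replaced with your own organization’s metric data as it accumulates.

```python
# A minimal Function Point Analysis style baseline, as suggested above.
# The weights are the standard IFPUG average-complexity weights; the
# productivity rate and risk multiplier below are assumptions.

AVERAGE_WEIGHTS = {
    "external_inputs":     4,
    "external_outputs":    5,
    "external_inquiries":  4,
    "internal_files":     10,   # internal logical files
    "external_interfaces": 7,   # external interface files
}

def baseline_estimate(counts: dict, hours_per_fp: float, risk: float = 1.0):
    """Unadjusted function points -> effort hours, scaled by a risk factor."""
    fp = sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())
    return fp, fp * hours_per_fp * risk

# A hypothetical module's element counts.
module = {"external_inputs": 6, "external_outputs": 4, "external_inquiries": 3,
          "internal_files": 2, "external_interfaces": 1}

fp, hours = baseline_estimate(module, hours_per_fp=8.0, risk=1.15)
print(f"{fp} unadjusted function points, ~{hours:.0f} effort hours")
```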
The next article in this series will take a look at implementing such a tool even when there is no prior history from which to generate average numbers.
This article first appeared on Tech Notes, Black Falcon Software’s technical blog.
Source: https://jaxenter.com/