Wednesday, August 17, 2005

How the Delphi Technique Works


The process can be completed in a few short meetings by a panel of experts, by corporate associates at large through a series of questionnaires, or by a hybrid of the two. The description below is deliberately loose at points where company policy or facilitator discretion may call for a variation.

  1. Pick a facilitation leader.

     Select a person who can facilitate, is an expert in research data collection, and is not a stakeholder. An outsider is a common choice.

  2. Select a panel of experts.

     The panelists should have intimate knowledge of the projects, or be familiar, through experience, with criteria that would allow them to prioritize the projects effectively. In this case, department managers or project leaders, even though they are stakeholders, are appropriate.

  3. Identify a strawman criteria list from the panel.

     In a brainstorming session, build a list of criteria that everyone thinks is appropriate to the projects at hand. Input from non-panelists is welcome. At this point there are no "correct" criteria. However, technical merit and cost are two primary criteria; secondary criteria may be project-specific.

  4. The panel ranks the criteria.

     Each panelist rates every criterion as 1 (very important), 2 (somewhat important), or 3 (not important). The ratings are made individually, and anonymously if the environment is politically or emotionally charged.

  5. Calculate the mean and deviation.

     For each criterion, compute the mean rating and remove any criterion whose mean is 2.0 or greater. Place the surviving criteria in rank order and show the (anonymous) results to the panel. Discuss the reasons behind items with high standard deviations; the panel may reinsert removed items after discussion. (A short calculation sketch appears after this list.)

  6. Rerank the criteria.

     Repeat the ranking process among the panelists until the results stabilize. The rankings need not reach complete agreement, only a consensus that everyone can live with. Two passes are often enough, though up to four may be performed for maximum benefit. In one variation, general input is allowed after the second ranking, in the hope that outside information will introduce new ideas or criteria, or improve the list.

  7. Identify project constraints and preferences.

     Projects as a whole are often constrained by the total corporate budget or by mandatory requirements such as regulatory impositions. These "hard constraints" set boundaries on the project ranking. More flexible "soft constraints" are introduced as preferences. Typically, hard constraints apply to all projects, while preferences apply to only some. Each panelist is given a supply of preference points equal to about 70% of the total number of projects. (For example, give each panelist 21 preference points if 30 projects have been defined.)

  8. Rank projects by constraint and preference.
    1. Each panelist first ranks the projects by the hard constraints: which project is most important to that panelist? Some projects may be ignored. For example, if the total corporate budget is $100 million, the panelist allocates each project a budget, up to the maximum requested for that particular project, such that the total of all allocations does not exceed the $100 million. Some projects may receive no funding at all.
    2. Next, each panelist spreads their preference points across the project list as desired. Some projects may get 10 points, others may get none, but the total may not exceed the predefined maximum (21 in the example above). (A sketch of checking these limits appears after this list.)

  9. Analyze the results and give feedback to the panel.

     Find the median ranking for each project and mark the 25th, 50th, and 75th percentiles of each project's rankings (the 50th percentile being the median). Produce a table of ranked projects, with preference points, and show it to the panel. Projects whose rankings cluster within the interquartile range may be considered to have consensus (depending on the degree of agreement desired); projects whose rankings spread into the outer quartiles should be discussed. Once the reasons for the large differences in ranking have been aired, repeat the ranking process. (A sketch of this analysis appears after this list.)

  10. Rerank the projects until the results stabilize.

      After discussing why some panelists (the minority opinion) ranked projects as they did, repeat the rankings. Eventually the results will stabilize: most projects will reach consensus, while some may remain outliers. Not everyone may be persuaded to rank the same way, but further discussion is unnecessary once opinions stay fixed. Present the ranking table to the decision makers, with the various preferences as options, for their final decision.
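
To make step 5 concrete, here is a minimal sketch in Python. The criterion names, the ratings, and the high-deviation threshold of 0.8 are hypothetical; only the 1-to-3 rating scale and the 2.0 cutoff come from the procedure above.

    from statistics import mean, stdev

    # Hypothetical ratings: each criterion maps to the panelists'
    # individual scores (1 = very important, 3 = not important).
    ratings = {
        "technical merit": [1, 1, 2, 1, 1],
        "cost":            [1, 2, 1, 1, 2],
        "time to market":  [2, 3, 2, 3, 3],
        "staff morale":    [3, 2, 3, 3, 2],
    }

    # Drop criteria whose mean rating is 2.0 or greater, then
    # place the survivors in rank order (most important first).
    kept = {c: r for c, r in ratings.items() if mean(r) < 2.0}
    ranked = sorted(kept.items(), key=lambda item: mean(item[1]))

    for criterion, scores in ranked:
        m, s = mean(scores), stdev(scores)
        # A high standard deviation signals disagreement worth discussing.
        flag = "  <- discuss" if s > 0.8 else ""
        print(f"{criterion:16s} mean={m:.2f} stdev={s:.2f}{flag}")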
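
Steps 7 and 8 amount to checking each panelist's ballot against the hard and soft limits. The sketch below assumes three hypothetical projects and budget figures; the $100 million cap and the 70%-of-projects preference budget are taken from the text.

    # Hypothetical project budget requests, in $ millions.
    requests = {"ERP upgrade": 60, "New plant": 80, "Training": 10}

    BUDGET_CAP = 100                        # hard constraint: total corporate budget
    PREF_CAP = round(0.7 * len(requests))   # preference points: ~70% of project count

    def validate_ballot(budgets, prefs):
        """Check one panelist's allocations against the hard and soft limits."""
        for project, amount in budgets.items():
            # No project may receive more than it requested.
            assert 0 <= amount <= requests[project], f"over-allocated: {project}"
        assert sum(budgets.values()) <= BUDGET_CAP, "budget cap exceeded"
        assert all(p >= 0 for p in prefs.values()), "negative preference points"
        assert sum(prefs.values()) <= PREF_CAP, "preference cap exceeded"

    # Some projects may be allocated nothing; that is allowed.
    validate_ballot(
        budgets={"ERP upgrade": 60, "New plant": 40, "Training": 0},
        prefs={"ERP upgrade": 2, "Training": 0},
    )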
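
Step 9's quartile analysis can be sketched the same way. The rankings below are hypothetical, as is the interquartile-range threshold of 1.0 used to separate "consensus" from "discuss"; the median and the 25th/50th/75th percentiles come from the procedure above.

    from statistics import median, quantiles

    # Hypothetical rankings: each project maps to the rank (1 = best)
    # that each of five panelists assigned it.
    rankings = {
        "ERP upgrade": [1, 1, 2, 1, 2],
        "New plant":   [2, 3, 1, 2, 3],
        "Training":    [3, 2, 3, 5, 1],
    }

    for project, ranks in sorted(rankings.items(), key=lambda kv: median(kv[1])):
        q1, q2, q3 = quantiles(ranks, n=4)   # 25th, 50th, and 75th percentiles
        # A wide interquartile range means the panel disagrees on this
        # project: surface it for discussion before the next pass.
        verdict = "consensus" if q3 - q1 <= 1.0 else "discuss"
        print(f"{project:12s} median={q2:.1f} IQR=({q1:.1f}, {q3:.1f}) {verdict}")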


