== Background ==
This skill has come up repeatedly as a key marker of a quality web developer. In some settings, such as the 2010 [http://sxsw.com/ SXSW] session for the P2PU School of Webcraft, people who proved particularly skilled at helping others achieve their goals, by being both proficient and timely in their answers, gained special recognition from the community.
== Good at answering other people's questions ==
''Breaking down this skill into component parts, we get:''
* Response time (or timeliness) – working habits, etc.
*** Any other relevant info?
** This metric, once the information above has been provided, is fully automated. We simply need to generate a datasheet each day (or longer interval, if appropriate) which captures the lag time between the posting of the question and the posting of the answer. For ASAP folks, the actual time intervals will serve as the scores. For people answering questions on some regular interval, the scores can be converted to 0s and 1s according to whether each question was actually answered in the specified time interval.
** ''Based on conversations on 8 July 2010 - The simplest metric here is just '''total number of comments''', which can be captured for everyone and displayed on their portfolio page. This will happen automatically. For participants who want more refined data, they will need to provide the P2PU system with some additional information about themselves and how they would like to be evaluated (see above). Once those data have been provided, then the system should be able to manage the rest of it automatically, again publishing the data on the individual's portfolio page. The individual can choose to display those data publicly or not.''
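To make the automated lag-time scoring concrete, it could be sketched roughly as follows. This is a minimal illustration only: the tuple layout, the `response_time_scores` helper, and the 24-hour interval are assumptions, not an actual P2PU data model.

```python
from datetime import datetime

def response_time_scores(posts, interval_hours=None):
    """Score answerers on response lag.

    `posts` is a list of (question_time, answer_time, answerer) tuples,
    a hypothetical shape for illustration. For "ASAP" answerers
    (interval_hours=None) the raw lag in hours is the score; for
    interval-based answerers the score is 1 if the answer arrived
    within the stated interval, else 0.
    """
    scores = {}
    for asked, answered, who in posts:
        lag = (answered - asked).total_seconds() / 3600.0
        if interval_hours is None:
            score = lag  # raw lag in hours; lower is better
        else:
            score = 1 if lag <= interval_hours else 0
        scores.setdefault(who, []).append(score)
    return scores

posts = [
    (datetime(2010, 7, 8, 9), datetime(2010, 7, 8, 11), "alice"),   # 2 h lag
    (datetime(2010, 7, 8, 9), datetime(2010, 7, 9, 15), "alice"),   # 30 h lag
]
print(response_time_scores(posts, interval_hours=24))  # {'alice': [1, 0]}
```

The same datasheet pass could then feed both the raw-lag view for ASAP folks and the 0/1 view for interval responders, as described above.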
* '''Completeness:'''
** This metric provides evidence that the person answering questions sufficiently understands what the questioner might need to act upon the answer to good effect. For example, an expert with a question would probably be satisfied with a high-level and targeted response where most of the intermediate steps were omitted. In contrast, a novice might just get more lost and confused without fairly explicit directions that start from the beginning.
** The resulting data can be rendered in raw form (e.g., % of total answers which were considered mostly helpful) or in comparative form (e.g., relative rank of helpfulness as compared to others answering questions, subject to some baseline comparability criteria).
* The second stage would remain hidden until someone responds to the first-stage question, and even then it would remain hidden unless the person who wrote the answer/comment wanted to have the more detailed evaluations performed on his/her work. For example, if I have opted in to being evaluated on all of the metrics described above, the P2PU system would embed some computer annotation in the system which would specially mark any answers I gave (presumably I have to log in to post an answer or comment to a query). This mark could either be hidden or visible, depending on how we think it might motivate behavior. A visible mark would signal to people that the person is seeking additional feedback and could motivate them to participate. But a hidden mark would make it more likely that people are engaging initially because they were motivated by the quality of the question or the subject matter and might therefore be more authentic. Either way is likely to be fine, and perhaps that is a choice we can give as part of the opt-in form.
* ''Based on conversations held 8 July 2010 - We decided that it makes more sense for all of these commenting fields to be available by default, but then route the data to an anonymized aggregate as well as each person's individual portfolio page. The person can then choose to make those data public or not. See section on technical details below for further info.''
* Depending on which categories I chose to be evaluated on, a new dialog (or however we want to code it) would pop up once someone rates my answer. This new dialog might ask: How was this answer useful (or how not)? Please consider the following specific categories (from above): completeness, clarity, accuracy, tone, etc.
** In each case, we will probably want to use a Likert scale (e.g., 1. Very clear; 2. Clear enough; 3. Somewhat clear; 4. Not so clear; 5. Totally confusing). If possible, we may also want to provide a text box for additional comments. It is hard to systematically deal with freeform comments, but the feedback is likely to be quite valuable to the person being evaluated and is in keeping with P2PU's general educational mission.
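A minimal sketch of how such per-category Likert ratings plus an optional freeform comment might be captured. The category list, scale labels, and `record_feedback` helper are hypothetical illustrations, not an existing P2PU API.

```python
# Illustrative category and scale definitions; not an actual P2PU schema.
CATEGORIES = ["completeness", "clarity", "accuracy", "tone"]
CLARITY_SCALE = {
    1: "Very clear",
    2: "Clear enough",
    3: "Somewhat clear",
    4: "Not so clear",
    5: "Totally confusing",
}

def record_feedback(store, answer_id, ratings, comment=""):
    """Append one reviewer's ratings (1-5 per chosen category) to `store`.

    `ratings` maps a category name to a Likert value; the optional
    freeform `comment` is stored alongside for the evaluee to read.
    """
    for category, value in ratings.items():
        if category not in CATEGORIES or not 1 <= value <= 5:
            raise ValueError(f"bad rating: {category}={value}")
    store.setdefault(answer_id, []).append(
        {"ratings": dict(ratings), "comment": comment}
    )

store = {}
record_feedback(store, "answer-42", {"clarity": 2, "tone": 1}, "Nice walkthrough")
print(len(store["answer-42"]))  # 1
```

Storing the freeform comment as an opaque string keeps it out of the systematic scoring while still routing it to the person being evaluated, per the mission noted above.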
The third image below shows the optional pop-up dialog that asks for additional information. As detailed in Section 3, above, this dialog would be hidden unless two things were true: 1) the person who wrote the original comment opted in to having it appear, and 2) someone was motivated to indicate whether the comment is useful or not. The dialog itself could be dynamic in the sense that it may only contain those specific follow-up queries and metrics which are of interest to the commenter. The terms would all need to be links to explanatory pages, and the scale may need to be fleshed out a bit. But in essence, this is all that would be necessary to have a system that can capture peer feedback at scale for various attributes of question-answering abilities.
[[File:P2PU_commenting_mock-up_3.png]]
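The dynamic dialog described above, showing only the follow-up queries a given commenter has opted in to, could be assembled along these lines. The metric names and canonical ordering are illustrative assumptions.

```python
# Canonical metric order for the follow-up dialog (illustrative names).
ALL_METRICS = ["completeness", "clarity", "accuracy", "tone"]

def dialog_fields(opted_in_metrics):
    """Return only the follow-up queries this commenter opted in to,
    preserving the canonical display order."""
    chosen = set(opted_in_metrics)
    return [m for m in ALL_METRICS if m in chosen]

print(dialog_fields(["tone", "clarity"]))  # ['clarity', 'tone']
print(dialog_fields([]))                   # [] -> dialog stays hidden
```

An empty result would correspond to the hidden-dialog case: no opted-in metrics means no second-stage pop-up at all.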
=== Further technical details ===
This version - 12 July 2010
* The default for answer commentary should be "on."
* The data should flow to two places:
** The commenter's portfolio page, where the data are only visible to that person when they are logged in and to no one else (default).
** An aggregated P2PU data view, which can show standard stats (numbers of comments; average, median, mode, and range of scores for comment characteristics; etc.).
* The portfolio data can be made public by choice. Only publicly rendered data will benefit from the networked characteristics of the system. For example, for any given answer, there will be some number of comments/ratings of that answer by the community. Everyone can see the scores (anonymized and aggregated), but people can also view individual scores and comments ''only for those commenters who have opted to have their contributions be publicly viewable.''
* The benefit of making comments publicly viewable (besides enhancing the value of the comments for the ecosystem) is that it allows people to publicly display competency scores for their answers and the comments on their answers.
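The standard stats for the aggregated data view (count, average, median, mode, range) could be computed along these lines, assuming the input is simply a flat list of 1-5 ratings for one person. The `aggregate` helper and its field names are illustrative, not an actual P2PU interface.

```python
from statistics import mean, median, multimode

def aggregate(scores):
    """Anonymized aggregate view of one person's comment scores.

    `scores` is a flat list of 1-5 Likert ratings; the returned field
    names are illustrative placeholders for the P2PU data view.
    """
    return {
        "count": len(scores),
        "mean": mean(scores),
        "median": median(scores),
        "mode": multimode(scores),       # list, in case of ties
        "range": (min(scores), max(scores)),
    }

print(aggregate([1, 2, 2, 3, 5]))
# {'count': 5, 'mean': 2.6, 'median': 2, 'mode': [2], 'range': (1, 5)}
```

Because the aggregate carries no commenter identities, it can be published for everyone, while the per-comment detail stays restricted to opted-in contributors as described above.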