Computers To Mark English Essays

digitig writes "According to The Guardian, computers are to be used in the UK to mark English examination essays. 'Pearson, the American-based parent company of Edexcel, is to use computers to "read" and assess essays for international English tests in a move that has fueled speculation that GCSEs and A-levels will be next. ... Pearson claims this will be more accurate than human marking.' Can computers now understand all the subtle nuances of language, or are people going to have to learn an especially bland form of English to pass exams?"
  • Context... (Score:3, Interesting)

    by borgheron ( 172546 ) on Saturday September 26, 2009 @12:34AM (#29546459) Homepage Journal

    "Time flies like the wind, fruit flies like a banana." -- Groucho Marx

    This is a classic example of context which a machine would fail to get. :)

    I would like to see an automated engine figure that one out.

    GC

  • Re:Context... (Score:2, Interesting)

    by NotQuiteReal ( 608241 ) on Saturday September 26, 2009 @12:37AM (#29546471) Journal
    "Time flies like the wind, fruit flies like a banana." -- Groucho Marx

    I am sure it was hilarious when Groucho delivered that line, spoken. As written? Meh.
  • Depressing (Score:4, Interesting)

    by Comatose51 ( 687974 ) on Saturday September 26, 2009 @01:00AM (#29546555) Homepage
    I'm not sure things were ever better, but the way writing is taught in public schools today produces horrendous results. I remember being taught a very formulaic way of writing essays: six paragraphs, an introductory paragraph, a concluding paragraph that mirrors the introduction, and every paragraph starting and ending with some transition to the next. Then there is the need to hit some specific length, although that part is understandable. It took a college education and many years of reading to undo these "lessons" and really discover the joy of writing essays. Thank you, Paul Graham and Nicholas Kristof, among many others.

    I see the same thing happening to the high school students I mentor. They write very boring essays padded with filler, full of sentences structured to use more words than necessary and to make the meaning more ambiguous. Poetry aside, writing exists to convey ideas, and the value is in the ideas themselves, not really in the words and sentences. The way writing is taught today, the words and sentences get in the way of the ideas. The trend of using computers to grade papers only adds to this rigid, boring way of writing.

    One thing I've learned about high school students is that even the low-scoring ones are very clever at getting around rigid rules. I once saw a student who knew very little about biology do her homework by scanning her book for the specific phrases mentioned in the questions and looking for some semblance of an answer wherever those phrases appeared. By the time she was done, she hadn't even read the chapter, but her answers would probably get her a "C" -- good enough for her. I'm afraid students will do the same in writing once they realize that computers are grading them.
  • Re:I doubt it! (Score:5, Interesting)

    by kklein ( 900361 ) on Saturday September 26, 2009 @01:44AM (#29546699)

    As an English prof myself, I'd like to confirm that we spend a lot of time on students' papers. Good papers are easy to breeze through, but the worse the paper, the more time it takes.

    As far as machine grading goes, people have been working on it for 30 years. I have no doubt that, statistically, it can provide useful results.

    The problem I'm seeing in these comments, however, is a common conflation of classroom assessment with standardized testing. I can't imagine using software to grade a student's paper in class. The student-teacher relationship is a personal one: that person is paying me to help them get better at writing, and it is my job to pore over that paper and show them where and how they can improve.

    I am also a tester (I actually mostly work with multiple-choice data, but I've also worked on performance rating--speaking and writing). The relationship between a rater and an examinee is very different from that of a teacher and student. The examinee is paying the rater to put them on a scale with other people. This is not a fine-grained assessment; it is always done at extremely "low resolution." When rating a paper for something like the GRE or other standardized test, it is the rater's job to compare the paper to scoring rubrics and make a call on which box of text best describes the paper, and then make note of the number in that box. That's it. It can't really go any more in-depth than that.

    For this reason, your comment about "five-paragraph themes" is an important one: Test task design always needs to be clear about what kind of performance is expected, because it is nigh impossible to write rubrics that can be applied to any performance (believe me on this, I beg of you). However, this is actually a question of test specification, not of the software or raters in question. Personally, as someone who works in EFL, I am actually in favor of retaining the "five-paragraph" formula, at least for timed essay tasks. That format is at the heart of all good rhetoric. Yes, it's stilted and silly, but if you can do it, it means that you know basically how information is expected to be organized in Western, especially Anglophone, societies. No good writer would actually use it, but any good writer could.

    Again, this is about putting people in boxes, not reading their essays. I can rate a 1-page essay in about 2 minutes, with excellent model fit (I have always used many-facet Rasch modeling for my multi-rater performance testing). I have no doubt that software could be employed whose ratings would be highly predictive of those of human raters.
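
    For readers unfamiliar with the term: many-facet Rasch modeling extends the basic Rasch model with a term for each "facet" of the testing situation, most importantly rater severity, so that an examinee's measure can be adjusted for having drawn a harsh or lenient rater. In the usual formulation (after Linacre), the log-odds of a paper receiving category k rather than k-1 on the rating scale is

        \ln\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = B_n - D_i - C_j - F_k

    where B_n is the ability of examinee n, D_i the difficulty of task i, C_j the severity of rater j, and F_k the step difficulty of category k. The "model fit" mentioned above is a check of how closely each rater's observed ratings track what this equation predicts.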

  • by a whoabot ( 706122 ) on Saturday September 26, 2009 @01:55AM (#29546723)

    It's a stretch to say that the computerised programme thereby marks the essay, or even that it takes a direct part in the actual marking (that is, in producing the mark given to the examinee). In that scenario the programme really marks the human marker, if it marks anything at all.

  • Re:Judging from... (Score:3, Interesting)

    by Firehed ( 942385 ) on Saturday September 26, 2009 @02:09AM (#29546761) Homepage

    Actually, the last time I did any serious writing in a word processor (at least two years ago), I found that enabling inline grammar checking and setting it to the strictest mode did tend to improve my writing. There were a few exceptions (it could never seem to decide between affect and effect), and while the suggestions weren't always great, it caught errors in syntax and structure often enough that I could go back and improve the writing overall.

    That being said, it's certainly not foolproof and absolutely not ready to replace a human - let alone a trained English teacher. I'm sure it could catch papers that ought to fail miserably with relative ease, but once you get into papers that would probably earn a C or better, it's time for something with a brain to take over.

  • by aepervius ( 535155 ) on Saturday September 26, 2009 @02:27AM (#29546837)
    You are probably speaking only of writing essays on random subjects in "English" lessons, because in my experience in physics, biology, and maths I saw horrendous grammatical errors made by people in their own language (German, French), errors that even I, not speaking the language, would not have made. Yet their organisation and the clarity with which they explained their reasoning were perfect. I am ready to bet that some people simply overlook the form (grammar and spelling) and concentrate on the content. That does not mean they are disorganized or sloppy. And frankly, in my own limited experience with multiple languages, the people least forgiving of the form are usually the ones unable to grasp the meaning anyway.
  • No and no (Score:5, Interesting)

    by grikdog ( 697841 ) on Saturday September 26, 2009 @02:46AM (#29546923) Homepage
    I've scored English essays for professional testing services, and I've seen the results of robot scoring. It's pretty shoddy. No, computers are not able to distinguish between a paragraph of As I Lay Dying (William Faulkner) and a gallon of sophomoric babble by, say, yours truly. However, within the confines of a particular exam, where the topic is known, responses are predictable, and all the supplicants hew to the general line, the 'bots can detect subpar, adequate, above-average, and (sometimes!) abnormally brilliant expository prose, thereby ranking papers reasonably well on the usual six-point scale.

    It's worth pointing out that certain types of exams are designed to elicit extraordinary prose from respondents - prose that conveys a sense of competence or even brilliance. In these cases, the idea is not so much to detect the high end of the bell curve as to identify the tiny pool of applicants who may be capable of Nobel Prize work in future realms of science or service. No 'bot can do that job, just as no 'bot except Deep Blue can beat Garry Kasparov, and no 'bot at all deserves the moniker Fujiwara no Sai (although Go-playing 'bots are approaching the mid-levels of highly ranked amateur players).

    That's the objective part. My personal opinion is that using robots to sort the hopes and aspirations of college-bound men and women is just begging for lawsuits. It's an approach in which differences of opinion quickly escalate to class actions against universities as well as test administrators, and it's not one I could comfortably recommend.
  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Saturday September 26, 2009 @03:39AM (#29547059) Homepage
    Will it decide if the following is well spelled? If it doesn't like the spelling, will it give it marks for irony?

    My New Spell Checker

    Eye halve a spelling chequer
    It came with my pea sea
    It plainly Marx four my revue
    Miss steaks eye kin knot sea

    Eye strike a key and type a word
    And weight four it two say
    Weather eye am wrong oar write
    It shows me strait a weigh

    As soon as a mist ache is maid
    It nose bee fore two long
    And eye can put the error rite
    Its rare lea ever wrong

    Eye have run this poem threw it
    I am shore your pleased two no
    Its letter perfect awl the weigh
    My chequer tolled me sew

    (Sauce unknown)

  • by icebike ( 68054 ) on Saturday September 26, 2009 @03:45AM (#29547075)

    It also scores great writing and even greater speaking very inconsistently.

    When fed Kennedy's "I am a Berliner" speech, these systems always scored it rather low. Repetitious. Gratuitous use of foreign words: Ich bin ein Berliner.

  • by PDX ( 412820 ) on Saturday September 26, 2009 @04:19AM (#29547149)

    Feed it real Shakespeare and watch it grade him an imbecile with poor grammar.

  • by TheRaven64 ( 641858 ) on Saturday September 26, 2009 @04:57AM (#29547219) Journal
    You manage to post three links to Newspeak, but are apparently completely ignorant of its history. Newspeak was based on the simplified version of English used by the BBC World Service. Orwell had a job translating political speeches into this dialect in the '40s (I think; read the article you linked to to check the actual date) and noticed that much of the subtlety and nuance was lost in the translation. He invented Newspeak based on this experience, theorising that you could tweak this dialect of English to make it impossible to express certain political ideas at all. For those with an interest in psycholinguistics, this is a specialised form of the strong linguistic relativity hypothesis (which is largely discredited, although the weak version is generally accepted).

    So, no, this is not the beginning of real-world Newspeak. The beginning was over half a century ago.

  • by Panoptes ( 1041206 ) on Saturday September 26, 2009 @04:57AM (#29547221)
    I teach IGCSE first and second language English, AS and A level English, IELTS, and the occasional TOEFL course. In these examinations there is, in fact, an "interests the reader" criterion explicitly set out in the marking scheme.

    As to the argument that writing mistakes and errors correlate with poor quality writing, I can agree to a certain extent. If the examinee is a native English speaker, it may well hold true in the majority of cases. But if English is their second - or a foreign - language, there is a much weaker correlation.

    Language register (degrees of formality) is important in these examinations, especially the IGCSE and AS level English. There is also an important differentiation between grammar (the basic rules of language) and structure (putting elements together using appropriate linking words and punctuation). Good structure possesses the quality that linguists call cohesion. Nowadays examinations tend to be less strict about grammar, and place more emphasis on the command of structure.

    My own take on machine marking of English composition may be summed up in two words - utter bollocks.
  • by markov23 ( 1187885 ) on Saturday September 26, 2009 @07:32AM (#29547619)
    The paper-scoring technology that I am familiar with (used by the GRE and some high school English classes) can't be fed a random paper -- it needs to be trained on a particular assignment. Then it can score papers for that assignment. The success rates are surprisingly good, but the application is limited to those kinds of tests, or to curricula designed around the assignments the system has been trained on.

    The more interesting effect of this type of system, reported by students (not GRE takers), is that it lets them write a paper, get it scored, make changes, and see if they are improving. When I was writing papers in high school, you wrote the paper, handed it in, got a grade a week later, and never thought about it again. This technology lets you learn a lot more from one paper by iterating through several versions and getting direct, specific feedback on how to improve. (A toy sketch of the per-assignment training idea follows.)
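
    To make "trained on a particular assignment" concrete, here is a minimal, hypothetical sketch of the general approach: fit a regression from text features of human-scored essays for one prompt, then score new drafts of that same prompt against the fitted model. This illustrates the technique only; it is not Pearson's or anyone else's actual system, and the essays, scores, and feature choices are invented.

        # Toy prompt-specific essay scorer: train on human-graded essays for
        # ONE assignment, then score new drafts of that same assignment.
        # Illustrative only; real systems use far richer linguistic features.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge

        # Hypothetical training data: essays for a single prompt, with human
        # scores on a 1-6 scale.
        graded_essays = [
            ("The author argues that technology reshapes education because ...", 5),
            ("Computers is good for school and i like them alot ...", 2),
            ("While some claim machines lack judgement, the evidence suggests ...", 4),
        ]
        texts = [t for t, _ in graded_essays]
        scores = [s for _, s in graded_essays]

        # Bag-of-words features weighted by TF-IDF; ridge regularisation keeps
        # a tiny training set from being memorised outright.
        vectoriser = TfidfVectorizer(ngram_range=(1, 2))
        X = vectoriser.fit_transform(texts)
        model = Ridge(alpha=1.0).fit(X, scores)

        def score_draft(draft: str) -> float:
            """Predict a human-like score for a new draft of the same assignment."""
            return float(model.predict(vectoriser.transform([draft]))[0])

        # A student can iterate: write, score, revise, score again.
        print(round(score_draft("Technology reshapes education in several ways ..."), 1))

    The per-prompt training step is the whole design: the model only knows what good and bad answers to this one question look like, which is exactly why it cannot be handed a random paper.
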
  • by nickspoon ( 1070240 ) on Saturday September 26, 2009 @07:49AM (#29547657)
    I know this is Slashdot and the majority of you are boring, but the 'inefficiencies' of the English language (and all other natural languages) are what make spoken and written English interesting and artistic. Sure, English is a stupid language if you were to assess it on its regularity, unambiguity and precision, but it is precisely this irregularity, ambiguity and imprecision which make it beautiful. And that, more than fully accurate communication, is the essence of language.
  • by xigxag ( 167441 ) on Saturday September 26, 2009 @08:36AM (#29547783)

    Formal written language is different from casual spoken language in grammar, syntax, and vocabulary. That's true of any language with a written form. So, for the purpose of writing an English paper (or some of the things it prepares you for: a newspaper article, a grant proposal, a cover letter, etc.), some of the things you are saying "aren't actually mistakes" are actually mistakes.

  • by nickspoon ( 1070240 ) on Saturday September 26, 2009 @09:31AM (#29547991)

    Besides constructed languages, this is the case for practically every language there is. There are always irregularities; this is down to the inherently human nature of linguistic evolution. If you learn English without a single irregularity, what you have learned is not really English, but some other English-derived language which English speakers will be unlikely to understand at all - at which point, you may as well force everyone to learn Esperanto.

    I also rather doubt that getting rid of odd past tense forms would really make learning English a great deal easier.

  • by notxarb ( 621681 ) on Monday September 28, 2009 @12:11AM (#29562251)
    I can't say if they have used this a lot in the past, but I know from some computational theory classes that language analysis is really hard to do. A lot of grammar checkers that I've seen are not very good.

"Money is the root of all money." -- the moving finger

Working...