Number 7 - Computer Grading Will Destroy Our Schools


As you know if you teach at a U.S. public school – or even if you just read the June 2013 New York Times feature about it – a consortium of state boards of education recently decided that we should have uniform standards about what we teach our kids. Hence, the alliteratively named “Common Core curriculum” is here. While you probably know that this curriculum will bring with it an increase in standardized testing, what you might not know is that it will also bring a strong push to have computers grade student writing. As I write this, about a half-dozen companies are vying for government contracts to do just that. Each of them has brought together a team of statisticians, linguists and computer programmers to produce the best possible software capable of automatically grading essays.

The reason for the push is both grim and obvious: money. Now that our schools are going to have more standardized tests, there are going to be more student paragraphs that need grading. Grading written work is laborious and time-consuming and, from a school board’s point of view, expensive. What computers offer is the ability to do this task faster and – once they are up and running – more cheaply. To be fair to the school boards, assuming the computer programs work, doing this task more efficiently could yield some benefits. It would lift a significant burden from the harried, overworked and underappreciated group of teachers and grad students we currently pay to grade standardized tests. But I suspect that, for most people, the thought of a computer “reading” essays is reflexively anxiety-provoking. It brings out the inner Luddite. Are we really supposed to believe that a machine can do just as good a job as a human being at a task like reading?

“Text analysis,” as computer programmers call it, is the process by which computers are able to grade essays. Text analysis employs techniques called machine learning to accomplish this feat – in particular, it uses supervised (vs. unsupervised) machine learning. The difference is that, in supervised machine learning, the computer’s grades are originally derived from – and then cross-checked against – the grades given by actual human beings. The unsupervised process, by contrast, tries to skip the human beings altogether.

The way supervised machine learning basically works is this: The computer treats the student’s essay as though it were just some random assemblage of words. Indeed, the jargon term for the main analytical technique here is actually “Bag of Words” (the resonance of which is simultaneously kind of insulting and weirdly reassuring). The program then measures and counts some things about the words that the programmer thinks are likely to be correlated with good writing. For example, how long is the average word? How many words are in the average sentence? How accurate are the quotations from the source text, if any? Did the writer remember to put a punctuation mark at the end of every sentence? How long is the essay?
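For the technically curious, the measurements above can be sketched in a few lines of Python. This is an illustration of my own, not any vendor’s actual code, and real scoring engines certainly measure far more than this:

```python
import re

def extract_features(essay: str) -> dict:
    """Compute simple surface features of the kind described above.

    Illustrative only: word length, sentence length and essay length
    stand in for the dozens of features a commercial system would use.
    """
    words = re.findall(r"[A-Za-z']+", essay)
    # Split on terminal punctuation to approximate sentence boundaries.
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "essay_length": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }
```

Note that nothing here involves understanding the essay; the program only counts.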

The programmer then “trains” the computer by telling it the grades assigned by human graders to a “training set” of essays. The computer compares – mathematically – the various things it has measured to the grades assigned to the essays in the training set. “Johnny’s essay had an average word length of 5 letters and an average sentence length of 20 words. The human told me that Johnny gets a B+. Now I know that much about essays that get a B+.” From there, given an average word length, sentence length, word frequency and so on, the computer is able to calculate the probability that a given student essay would receive a particular grade. When it encounters a new essay, it can take the things it knows how to measure, and – based on what it learned from the grades assigned to the training essays – simply assign the most probable grade.
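As a toy illustration of this training-then-prediction step, here is a sketch that assigns a new essay the grade of its most similar training example. The nearest-neighbour rule and the numbers are my own simplifications, not what any vendor actually fits, but the principle – learn the mapping from surface features to human grades, then apply it to new essays – is the same:

```python
def predict_grade(new_features, training_set):
    """Assign the grade of the most similar training essay.

    A toy stand-in for the statistical models real scoring engines fit.
    """
    def distance(a, b):
        # Squared distance between two feature dictionaries.
        return sum((a[k] - b[k]) ** 2 for k in a)

    best = min(training_set, key=lambda ex: distance(new_features, ex["features"]))
    return best["grade"]

# Hypothetical training data: features measured from human-graded essays.
training = [
    {"features": {"avg_word_length": 5.0, "avg_sentence_length": 20.0}, "grade": "B+"},
    {"features": {"avg_word_length": 3.8, "avg_sentence_length": 9.0}, "grade": "C"},
]
```

Johnny’s new essay, averaging 4.9 letters per word and 19 words per sentence, lands nearest the B+ exemplar and is graded accordingly.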

It turns out that this works surprisingly well. Shockingly well. What we might think of as totally surface-level or accidental features – like having more words per sentence – are actually correlated very strongly with earning better grades. Statistical analyses, at least, tell us that machine-learning techniques perform just as well as human beings – that is, their grades for new essays are the same or similar a huge majority of the time. Plus there are some ways in which the computers might actually be better. The people who presently do essay grading for tests like the AP English exam work long hours and have to grade essay after essay. Unlike computers, they get tired and irritable and bored. Also unlike computers, they come pre-equipped with a whole bunch of biases. These little nuisances and irritations, we might hope, will wash away if we instead let a computer sort unfeelingly through the bag of words.

But, alas, it isn’t so. Since the process by which the computer “learns” is anchored to grades assigned by human beings – the training set teaches the computer what kinds of grades we tend to give to what kinds of essays – the tiresome, unsexy little things that make us imperfect are built right into the system. For instance, if the graders who grade the training set tend to strongly penalize nonstandard uses of English – including nonstandard uses more common among racial minorities – so too will the machine. The computer will operationalize, and then perpetually reinstate, the botherations and biases we feed it. The best strategy, thus, will be to use mechanized, look-alike writing, which will be tautologically defined as good writing because it is associated with receipt of a good grade.

One obvious problem is that if you know what the machine is measuring, it is easy to trick it. You can feed in an “essay” that is actually a bag of words (or very nearly so), and if those words are SAT-vocab-builders arranged in long sentences with punctuation marks at the end, the computer will give you a good grade. The standard automated-essay-scoring-industry response to this criticism is that anyone smart enough to figure out how to trick the algorithm probably deserves a good grade anyway. But this reply smacks of disingenuity, since it’s obvious that the grade doesn’t reflect what it’s “supposed to” – namely, the ability to write a reasonably high-quality essay on some more or less arbitrary topic.
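The trick is easy to demonstrate. Shuffle a grammatical sentence into word salad and every surface statistic of this kind comes out identical – a self-contained toy of mine, not any vendor’s actual feature set:

```python
import random

def avg_word_length(text):
    """Average word length, the kind of surface statistic a grader measures."""
    words = text.rstrip(".").split()
    return sum(len(w) for w in words) / len(words)

sentence = ("The ubiquitous proliferation of standardized assessments "
            "engenders considerable consternation.")

# Shuffle the words: grammatical nonsense, identical surface statistics.
shuffled = sentence.rstrip(".").split()
random.shuffle(shuffled)
salad = " ".join(shuffled) + "."

assert avg_word_length(salad) == avg_word_length(sentence)
```

To a bag-of-words grader, the sentence and the salad are indistinguishable.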

Another slightly less obvious problem is that, since the computer is just measuring and counting, it can’t actually give you meaningful feedback or criticism. It has no idea what big-picture themes you were exploring, what your tone was, or even what you actually said. It just tries to approximate the score you should get – that is, it tries to put you into a little box. While this kind of box-sorting is fine for literal grading, it doesn’t really help with teaching you to be a better writer.

There are other problems, too. A former professor at MIT named Les Perelman has pointed out that the way the automated-essay-grading companies are analyzing their software’s performance is unfairly biased toward the machine. Perelman’s paper, although eye-glazingly dense in data analysis, notes that while a human grader’s reliability is checked by comparing his or her grades to someone else’s, the machine’s reliability is checked against a resolved grade, which reflects the judgments of multiple human readers. But the standard statistical measure of agreement, called Cohen’s Kappa, is – as Perelman puts it – “meant to compare the scores of two autonomous readers, not a reader score and an artificially resolved score.”
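For readers curious what the statistic actually computes: Cohen’s Kappa is observed agreement corrected for the agreement two raters would reach by chance. A minimal implementation of the standard formula:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters assign the same grade at
    # random, given each rater's own marginal grade distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[g] * freq_b[g] for g in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Perelman’s objection, in these terms: the formula assumes `rater_a` and `rater_b` are two independent readers, whereas the vendors’ analyses feed in one machine score and one “resolved” composite of several human scores.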

There is a deeper issue, though, to which I suspect many people’s thoughts will have jumped immediately: the idea that reading and writing are uniquely human, and that our ability to do these things is part of what separates us from machines. Show me the most robust correlations in the world, and still I will show you – the horror! the horror! – a robot doing something I had firmly believed only a human could do.

While it may be tempting to dismiss these reactionary worries as empirically ill-informed, I think we should resist. There is, at the end of the day, something soul-shakingly serious about these feelings. The grandiose concepts they invoke – like “the place of humankind in nature” – are banalified and overused now, but I think it’s worth taking them seriously. If computers can do things that we thought only human beings could do, can we continue to think of ourselves as unique? If computers can carry out operations that we thought only the human mind could carry out, are we forced to think of our minds as essentially mechanical? I know these are heady questions, but I don’t mean to ask them as an invitation to fatuous navel-gazing. I just mean that when I honestly contemplate computers reading essays, they just sprout up, as insistent as nettles.

Here, then, is my attempt at an answer: Human beings possess an extraordinary set of intellectual capacities that are not replicable by any combination of computational techniques, no matter how sophisticated. These capacities are those that we traditionally think of as belonging to – and representing the achievements of – (no pun) the humanities. What computers will never do, that is, is create original works of art as emotionally rich, thematically evocative or aesthetically stunning as those created by human beings. There is, in other words, no set of computational techniques capable of mirroring the intelligence we use in creating original artistic works, especially those that reach our emotional depths.

Some people may hasten to respond that apparently I am ignorant of what is already out there. There is, for instance, a program written by David Cope, a music professor at UC Santa Cruz, capable of engineering new works of classical music that sound just like those by, say, J. S. Bach. Indeed, Cope’s program is so artful in its imitations that trained classical musicians have mistaken its compositions for works by the laureled master himself. Cope’s software treats all common features of two musical works as indicators of “style,” measured along several dimensions, including rhythm, melody and harmony. The program then uses that information – plus certain randomizing and recombining functions – to create new compositions. Give it a database of any classical composer’s works, and it can create a pretty convincing mimesis.

In addition, there are programs that can write books. For instance, some algorithms can generate new novels in certain well-established and trope-defined genres like the whodunit or Harlequin romance. These programs, like Cope’s software, find common features of their target genre – character traits, myth arcs, sentence structures – and then recombine them. There is even a program developed by Russian computer scientists that has written its own take on “Anna Karenina” in the style of Haruki Murakami. After performing an extensive analysis of data about both authors’ books, it produced a novel called – I am not kidding – “True Love.”
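The “find common features and recombine them” recipe can be caricatured in a few lines with a word-level Markov chain. The published generators are far more elaborate; this is only the skeleton of the idea, written by me for illustration:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record, for each word in the source, which words follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length, rng=random):
    """Recombine the source's own word transitions into a 'new' text."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Feed `build_chain` a composer’s scores (suitably encoded) or a novelist’s corpus, and `generate` will emit plausible-sounding pastiche – recombination, not creation.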

If there are several things out there today that look an awful lot like a computer writing an interesting novel or composing a beautiful piece of music, doesn’t that suggest that, someday, a computer might succeed in creating truly moving artwork? Isn’t success, then, really just a matter of highly subjective artistic sensibilities? Indeed, isn’t this whole reaction against artificial intelligence (AI) just a rehash of the sniveling, woeful, “our-culture-has-reached-its-nadir” nonsense that holds that the true humanities are beyond the comprehension of the plebes? The same “sweetness and light” nonsense that Matthew Arnold was peddling in 1869, when he argued that our culture would, by neglecting the humanities, “fall into our common fault of overvaluing machinery”?

No. And no for a simple reason. None of these programs can do the thing I’m talking about. In other words, they are, all of them, cheating. Not because there are not enormous intellectual challenges in getting a computer to imitate a great artist. But simply because miming another’s style is not the same thing as original, spontaneous creation. Sorry to be harsh, but it’s true: It is totally obvious that, given enough data about Bach’s piano concertos or Murakami’s literary idiosyncrasies, it is possible to manufacture a convincing imitation. But this is just categorically not the same kind of intelligence as that required to create artistic works of one’s own.

What AI is actually using is data analysis, modeling and measurement, which are, of course, good tools for mimicking works that already exist. But it is only by redefining the goal that this counts as success. There is another domain of human cognition, a creative domain, that draws on a different set of capacities altogether – intuition, aesthetic judgment, emotional awareness and self-expression. Spontaneous creation requires these capacities; mimicry through data-crunching can dispense with them. Show me a computer that engages them.

By its very nature as a pre-programmed device, a computer needs a human being to interpret its acts, which are themselves structured by its human creators. A computer has no long-shuttered pains, no treasured memories, no unhealed heartache, no silly childhood fondnesses, no snarled complexes about its parents, no sexual ecstasy – in short, nothing worth writing or painting or singing about at all. To do as we do, the computer would be forced to approximate all of the messiest parts of human experience – what Freud called the unconscious – through some mind-bogglingly complex amalgam of methods, the functional equivalent of living its own human life. If such a task is not actually impossible, it is so monstrously difficult that it might as well be.

So what? What does this have to do with automated essay grading or standardized testing or the fate of children in the U.S. public education system?

If you’re persuaded by what I’ve said about the limitations of AI, it follows that, as long as automated essay grading is around, the trend toward the mechanization of student writing is not going to change. The present state of automated essay grading is not just some stopgap measure until we can get better robots. It contains, subject to certain tweaks, all the essential elements of the full-blown future itself. In that future, not even the simulacrum of human responsiveness will be available on many of our most important assessments of writing. The apex of the academic year will be a test of writing that no human being will ever read, care about or feel anything for.

Thus, our culture will stop engaging with students on those very aspects of the humanities that make them worth studying in the first place. We are going to end up with a system that dispenses rewards in a way that is indifferent to – and divorced from – the most alluring parts of the humanities, those creative capacities that they let us engage. If our instruction in the humanities necessitates ignoring these abilities, then it is my opinion that there no longer is much point to teaching the humanities at all, and we should end the charade. In other words, if this kind of mechanized, standardized-test-friendly drivel is all we can offer our children as “the humanities,” then who cares about the humanities?

Once the use of automated essay grading becomes common knowledge, the implicit message will be hard to miss. For any self-aware, warm-blooded American teenager, the conclusion will be all but inescapable: Nobody cares what you have to say. It could be brilliant and moving; it could be word-salad or utter balderdash; it really doesn’t matter. Content, feeling, creativity, thematic depth – none of it matters. Today’s students will recognize this; they will react to it; and it will inform who they grow up to be. Indeed, I confess that if I were a teenager, my response would be the same as theirs – the selfsame response that we tend to associate with (and dismiss as just) teen angst. What is the point, after all, in being rewarded by a system that doesn’t care who you are? If no one is going to read the essays, we might as well rip them up.

But in truth, what is to be done? After all, don’t we need guidelines about what our kids should be learning in school, especially if we’re going to entrust them to it many hours a day? Are we really to believe that the answer is to let them do some wishy-washy, personal-feelings writing or painting and hope it will teach them the skills they need? What part of that is going to be useful for getting a job?

But this is precisely the problem. We have become so veritably obsessed with the ideas of performance and achievement and rank that we have let ourselves completely lose sight of the things we cared about in the first place. We try to subsume the qualifications for everything – even disciplines where right and wrong are not only subjective, they are immaterial – under neat, mechanized, formal criteria. Have we forgotten the feeling of naïve, impractical fascination? Have we forgotten that we, too, once were interested in things just because they grabbed our attention? Trying ferociously to kindle that spark with the needs of the “21st-century job market” or anxieties about GPAs is, of course, only going to extinguish it.

The deepest reason to get rid of automated essay grading is not that the statistical correlations aren’t good enough yet (this is fixable) or that one can cleverly trick the computer (this is true, but not the root issue). The reason to get rid of automated essay grading is that the whole point of doing something like writing an essay is to learn to engage on a level that machines cannot participate in or really appreciate. It’s to use the other part of your mind in an effort to communicate with other people. That, the profound and joyous sense of recognition that comes from communication, is the thing worth teaching, and it is the thing worth learning. We should put aside the pretensions of objectivity and practicality and get back to the part that really matters. It is time for us to slacken our grip.

8 comments:

Benjamin Broadbent said...

Sorry Ms Matthews
KaydenBorchowsky Aug 5, 2015
Essay Marking Computers could work. Maybe not with the technology that the developers are trying to make now, with the sentence length and the word count grading, but in the future when they can program computers to "read" through entire essays as they are, not just as bags of words.
I completely agree with the student when they say that this system, the one they are developing now, is easily fooled, that it is easy to trick it. However, if a group of academics have the time and patience to write thousands of excellence, merit, achieved and not achieved essays and feed these into the computer, it can learn what to look for when marking essays. It will learn that they need quotes and evidence, as well as the proper structures. It will learn exactly the same thing that students learn in our class, and that teachers learn in university. I mean, how does a teacher learn to mark essays? Surely there’s some sort of marking method they learn? A criteria to follow. I am no expert in computer science, but I can't see why a computer can't be taught the exact same thing. When they teach a computer how to properly grade an essay, it will only be as easily tricked as a teacher is.
Essays are about communication, right? Well, isn't this just a different way of saying the common idiom, "It's about the journey, not the destination?" However we have built cars to drive us places, or, better yet, we have built aeroplanes to fly us places. On aeroplanes what do most people do? Sleep. They miss the alleged "most important part of the experience,” but do you see people trying to swim all the way to America just so they can experience the journey first hand? No, well at least I hope you don't. So, taking that into account, are the essays students write about communication, or are they about the grade?
The bold claim the student writes, that "Human beings possess an extraordinary set of intellectual capacities that are not replicable by any combination of computational techniques, no matter how sophisticated," is very arguable. This intellectual capability they talk about that cannot be replicated sounds a lot like consciousness. As we have discussed previously, we currently have no idea what this consciousness is, so how do they know it can't be replicated? Sure, it can’t be replicated right now, but we are not talking about this technology being available at this moment. In fact, as I said before, I think the developers are going in the completely wrong direction with the word count grading thingy. Even Natasha with her vision of monstrous AIs attacking her as an old lady is looking very far ahead in the future.
The cheating AI’s argument. This states that a computer is cheating when it replicates novels and songs because it is just imitating what someone else has done. Yes, of course it is, but isn’t that exactly what we want the computer to do when it marks essays. We are creating this machine, as we do with a lot of other technology, to do something Humans can do themselves. We want this machine to mimic what we can do. I personally hope that this future computer marks my children's essays the exact same way a teacher would.
This was a very long comment and I haven't even begun to talk about how an AI could potentially become more intelligent than a human. I mean, look at all of the futuristic dystopian movies, where AI overthrows the human race. If a machine can operate a helicopter, can demolish the entire digital world, then surely it can do something as rudimentary as mark an essay. If you have taken the time to read this far, thank you, and I hope you can't argue against all of what I’ve written.

Benjamin Broadbent said...

Terminator Teacher - Or I have Severe Trust Issues
natasha_scott Aug 5, 2015
I don't like the thought of sentient AI, and I never will, so in the future when we live alongside living robots or whatever, I'm going to be that one old lady standing on my porch wielding a gun and warbling, "Die, evil scum!"
On a more serious note, I think AI marking our grades would be very bad. Sure, essays are just a bag of words, but like someone suggested – I think it was Emily – if someone wrote something that was completely outside the box but still worth an Excellence, the computer wouldn't realise that and would give the essay a non-achieved. I would be outraged if my 'marker' gave me a non-achieved when I knew it was at least a Merit or an Excellence. It might be unlikely, but some of the brightest and most intelligent minds of our generation could fail school because they think so creatively. And, on the other hand, people who like to cheat or rely on cheating to get through school might learn how to get Excellence through loopholes in the marking system by entering strings of certain words. Cheaters could win, and hard workers could fail.
And there's the possibility of hacking. I don't know why someone would do this, but someone might target a certain person and make them fail all their grades by altering something in the computer.
Maybe a jealous cheater who's always stuck in detention sees a hard worker who's always getting Excellence, and hacks stuff to make them fail. Sure, Math and science might be more easily marked, but it can still be altered, and can still possibly be failed.
Robotic marking could open up a whole new section of cyber bullying.
I'm paranoid, but sometimes I feel like I have reason to be.

Benjamin Broadbent said...

Je pense que il y a un melon :3
lilly_zhang Aug 2, 2015
So, to start off, I'll be completely honest. I thoroughly dislike the idea of computer grading, and this article only added to that opinion. There will be a bias in this. Yes, it has its benefits, like cutting down fees and requiring less time from teachers, but at what cost do these benefits come?

In my spare time, I write, and I publish to a place that I'll refrain from talking about. When I publish, I can have other users comment, whether it be on the usage of foreshadowing, whether they enjoyed it, or simply to laugh at the filter that immediately reports the word 'kill' when I'm jokingly threatened with murder. The knowledge that there is someone else reading what I have put effort into – though somewhat minimalistic at times – is worth the elation, and to know there are people from countries ranging from the US to Kuwait or Oman (my demographics will prove me right, if you're wondering) is awesome. There's another person on the other side of the screen in a place I can’t even locate! Then again, current systems mean that the people who do mark my work are a little closer to home.

What would happen if I took away that person, and were left simply with a computer? It has recent software made to fit the specifications given, and can comprehend enough of what I have graced the internet with, but that is where we draw the line. Plus, I use the term comprehend loosely. Chances are that it will never have 100% knowledge on what I ramble on about (some nights, I don't either though) and won't be able to tell me if my sentences make more sense than I do. I can't get feedback on whether the character is acting like a jerk (as Tash so often complains (cheers, Tash!)) or on the predictable plot twists, bombshells and cliffhangers I throw. The most it'd do would probably be telling me I misspelled/mistyped a various array of words. Not only that, but it doesn't care about my sentence structure, just average sentence and word length. I could write out a nonsensical string of words like “hippopotamus automotive conjuncture philanthropist anthropology technicalities divisional approximation vicissitudes ect.” and it’d probably accept it as a sign of intelligence (Google Docs actually did. It’s currently only complaining about my use of ‘ect.’)

Then, there's the argument that anyone involved in these politics will use. "This grading system is basically synonymous with savings! Think of the taxpayer money you'll save! We can finally put that funding into things better than unnecessarily expanding a footpath that the public didn't need!" (I say this because I refuse to believe Johnsonville needed a better reason to crash cars.) There's the fact it saves our teachers' time, where those hours could go towards watching the Flash, or singing about/falling in love with an American city (I say this because I'm now constantly asked if I'll go watch BOM). This time saved also equals money saved. However, it doesn't equal lower tax rates. We're still funding what the public will later dispute and label as idiotic and only valued by Parliament, and it’d begin at this marking scheme.

Benjamin Broadbent said...

I will, however, query whether this electronic marking requires a typed essay. If you're anything like me, and have been subjected to an incapability of reading your own handwriting, then I doubt a computer will manage it. Most of my current or former teachers will probably agree that mine is illegible to the point where I could rival doctors, so what would AI make of that scribble? Then, on the flip side, do we really want to sacrifice writing for typing? Within our school, year 9s are already being taught through the use of devices, where the need for spelling and handwriting flies out the window. No longer do they have a need to learn the difference between “defiantly” and “definitely”, when autocorrect is there to be (un)reliable. In 20 years' time, children may no longer be learning how to write the alphabet, only which keys to press.

Lastly, I want to talk about the idea behind this. In a failed attempt at modesty (I can't be humble and state my argument at the same time), I want to say, for my own self, that this would not give a just representation of my literate capacity and capabilities. Yes, my vocabulary is worth expanding, as it is mediocre at best. Ask me what "idiosyncrasy" means, and the best I can do is tell you it was a spelling word. Of the words I do use, most are of a disappointing length. Still, human markers do not deem this to be a particularly important fact, judging on what I think, rather than what I say. What happens when that is reversed? Proverbs tell me that "less is more", and teachers tell me to condense my work, and not to write a novel that could rival "Harry Potter and the Order of the Phoenix" in my exams. What happens if all this goes out the window, and the lessons I and my peers spent years learning all become irrelevant? A world based on whether I can expand "big" into "gigantic". The system implemented by New Zealand may work differently, but by the barest minimum. No longer would we write for a conscious audience. For multi-choice questions, or subjects such as math, where 1 + 1 is supposed to be 2, I see computer grading as no issue. It's all objective; it's right or it's wrong. But if it all gets handed over to the technology... it's write or wrong.

Benjamin Broadbent said...

“Kitty couldn’t fall asleep for a long time. Her nerves were strained as two tight strings, and even a glass of hot wine, that Vronsky made her drink, did not help her. Lying in bed she kept going over and over that monstrous scene at the meadow.”
niranjannewlands Jul 29, 2015
(That paragraph was written by a computer.)

This post is written more as a response to this post than as a standalone argument. In my opinion, this essay is well-written, and the author is well-versed in rhetoric, but it lacks one vital element: facts. This essay is, shall I say, a first-figure enthymeme with a suppressed major.

The author's first point is a 'personal incredulity' fallacy (take a look at this: https://yourlogicalfallacyis.com/personal-incredulity). The "Are we really supposed to believe that a machine can do just as good a job as a human being at a task like reading?" quote is exceptional in eloquence but a defeat in logic. Are we really supposed to believe that a fish turned into a human because stuff happened? Are we really supposed to believe that there was a massive explosion that created little things turning into more little things turning into everything?

The marking schedule that the author has conceived is both flawed and fictitious – any computer scientist with an ounce of intelligence will know the difference between correlation and causation and attempt to program a computer that endeavors to find causation over correlation. They will – and should – also try to utilise a better marking schedule than the shoddy one that the author has put forth.

The author's next point is his weakest overall - he argues against computers by attacking human markers. The author says that computers will have the same biases as the computer's trainers - but if you think about it, this point is contradictory and ends up weakening their entire argument. What do you think would have less biases: one human or a computer trained by multiple humans?

The author then talks about how a clever student could deceive the formula that the computer utilises. However, the problem is that human markers also have a formula that they use for marking essays. This is called an 'Assessment Rubric', and most English teachers are familiar with using them. It is easier to use this rubric to please the teacher than to make an exceptional essay. Yes, these human formulas are far more refined than the computer's, but with time, we can develop the computer's formulas to reach this level.

Also, it is not very difficult for a computer to provide feedback to students after marking their essays: if computers can write stories like the aforementioned paragraph, then it is not very challenging to design a machine that locates areas with lower scores and writes a paragraph that explains how the student could receive a higher score in that area.

Les Perelman's argument is utter nonsense. Cohen's kappa does not discriminate between a human reader's score and an artificially generated one: the statistic measures agreement between any two raters and treats them symmetrically. In fact, my entire counter-argument rests on the fact that the two can be compared on exactly the same footing.
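Cohen's kappa is symmetric by construction, which a small sketch makes plain (the six essay scores below are invented; a real validation study would use hundreds):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance.

    The formula is symmetric in rater_a and rater_b, so it makes no
    difference whether either list of scores came from a human or a machine.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of essays given the same score.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[s] * freq_b[s] for s in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores on six essays (0-3 scale):
human   = [3, 2, 2, 1, 0, 3]
machine = [3, 2, 1, 1, 0, 3]
print(round(cohens_kappa(human, machine), 3))  # → 0.778
```

Swapping the argument order gives the identical value - nothing in the statistic knows, or cares, which rater is the computer.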

Benjamin Broadbent said...

The next few arguments the author makes are based on a superiority complex - perhaps even a kind of patriotism for our species. There does not need to be anything inherently unique about being human. The author is using an anti-change mindset as a shield against the future. Just because something completely upends our worldview does not mean it is not valid, as science has proven countless times.

The author has also listed human qualities that a computer supposedly cannot replicate: intuition, aesthetic judgement, emotional awareness and self-expression. The problem with this argument? It is possible to create computers with these qualities. Artificial intuition is a field of artificial intelligence, and some researchers argue it should be preferred over pure logic. Computational aesthetics is another field being developed, as are artificial emotional intelligence and artificial self-awareness. When these mature to an acceptable level, robots will have most of the qualities a typical essay marker has - in fact, they will have the qualities a typical human has.

To conclude: yes, machines can engage on a level that humans can participate in and appreciate. Even though artificial intelligence is not sufficiently developed yet, we will, sooner or later, create minds with as much consciousness as our own.

EmilyHollis Jul 29, 2015
For the sake of humanity, for the sake of every true and integral instance of happiness ever felt by anybody, I use every disagreeing crevice of my body and soul to entirely oppose artificially intelligent beings marking my work. The overwhelming stream of thoughts surrounding this topic compels me to say this: they are not alive. I cannot begin to fathom the state of mind of those considering the use of computers to mark essays; to me it is a more nauseating crime than any murder. I am fully aware of money, its worth and its ugliness, and of how we power-hungry, lustful beings will do anything and everything under its influence; I understand. That is our nature. But to place the joy of being alive - of watching the world soar around and across us in blooming, vibrant colours, of feeling, memorising and quaking in emotion as it filters so swiftly through our eyes - below computers doesn't make you any more human than a robot itself. I despise every implication of it.

Essays are communication. We speak, we allow ourselves to communicate, to create and to blossom. The phrase 'bag of words' should immediately invoke distrust, because that is exactly what it is: a bundle of nonsense. From the very beginning of time we as humans have used thought and philosophy to create concepts in many forms, but my favourite is words. Words have a power similar to or greater than that of an atom: enormous, stored compactly within the confines of a minuscule capsule. They boom. They echo, and they are the fundamental elements upon which we build castles or destroy infinities. They are priceless, and I would never let anybody take them away from me. Especially not a computer. Especially when we weave this tapestry of vibrancy and light, contrasted and flourishing in empowerment: words and sentences and paragraphs. Why should we give that away? Why should we ever be so soulless and breathless as to undertake such an act of pure destruction? We are beings of light and glory and memory, and without the words to counterpart them we are nothing. If we choose computers, we choose to disregard everything we have done and will ever do. Without this depth of spirit, without the memories of joy and death and ecstasy and enigma scarring us like imprints on sand while we attempt to communicate our minds as our trademark and our souls, we are nothing. If humanity has such capacity to love, we are human. How can a computer, which has never lived a whole series of moments such as ours, ever relate to our perspective - belittle it, yet understand it - in order to give us a mark?
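For what it's worth, the 'bag of words' name is literal: the representation keeps only word counts and discards all order, as a minimal sketch shows.

```python
from collections import Counter

# A bag-of-words model keeps only word counts, discarding all order.
a = "the dog bit the man".split()
b = "the man bit the dog".split()

print(Counter(a) == Counter(b))  # True: opposite meanings, identical bags
```

Two sentences with opposite meanings produce the same bag, which is precisely why the representation alone cannot capture what an essay is actually saying.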

Benjamin Broadbent said...

Aside from words, we have memory: the memory seen in 'The Giver', as well as our own personalised store of heartbreak and joy accumulated daily. From it we learn that pain derives from pleasure, and pleasure can only be felt in contrast to pain. Interwoven and interlinked: that is the main message Lois Lowry conveys. Yet this intricate dialogue of scenarios we encounter is what we become and how we thrive; our emotional response to each event topples our perspective ever so slightly. Being fourteen doesn't really account for much, but I'd like to think my life has given me something to care for and something to despise (coughRUNNINGcough), and I, selfishly, expect someone to read my essay and to think about it. But isn't that our purpose? To convey messages, through our intuition and knowledge, on levels computers cannot and shouldn't even be considered able to fathom? Perhaps I am part of a generation of internet-obsessed people who are becoming more and more similar by the day (which I want to edge away from), but mashing together a series of long, extravagant sentences supported with the alluring punctuation that computers adore is an insult to the creation of words, and something that, if it happens, I will be fleeing the country to distance myself from.

Really, it's just our ability to love which sets us apart from AI and animals. It is from love that we derive knowledge, create artistic masterpieces and spark motivation in ourselves and those around us. It is love we thrive upon; we eat and breathe it even when its presence is not evident or obvious. Passion for what we do, and our love of the words we use to communicate, is something beautifully imperfect in my eyes. Although we could go on eternally about the idealism of love and its flaws, it is fair to say, I should think, that a tired, grumpy marker ready to give a Not Achieved to anyone who fails to enter the realms of depth and perception is better than a delightfully, falsely cheery computer ready to fail someone for forgetting their last full stop.