- A robot may not, by action or inaction, cause harm to a human.
- A robot shall obey orders from a human, except where in conflict with Law 1.
- A robot shall protect itself from harm, provided this does not conflict with Law 1 or 2.
What is a robot? The word comes from robota, a Czech term for forced labour, and entered English through Karel Čapek’s play R.U.R. Isaac Asimov took it up to describe a machine with human-like qualities. The modern term denotes any machine with a degree of autonomy, with no specification as to that degree. Even a subroutine on your phone, keeping a look-out for supermarket specials, is acceptably called a bargain-hunting ‘bot, or similar. For our purposes here, and everywhere else, may we propose the following definition of a robot:
Any system that collects inputs and delivers predetermined outputs, according to a predetermined set of instructions, acting with a degree of autonomy, qualifies under the description of ‘robot’.
This includes your bank!
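The definition above can be sketched in a few lines of code. This is purely illustrative: all names and the toy "toaster" rules below are this sketch's own inventions, not anything from the article.

```python
# A minimal sketch of the article's definition of a robot: a system that
# collects inputs and delivers predetermined outputs according to a
# predetermined set of instructions. All names here are illustrative.

def make_robot(instructions):
    """Return a 'robot': a closure that maps inputs to outputs by fixed rules."""
    def robot(observed_input):
        # The robot acts autonomously, but only within its programming:
        # inputs it was never programmed for produce no meaningful output.
        return instructions.get(observed_input, "no programmed response")
    return robot

# A toaster, by this definition, is a bread-caramelizing robot:
toaster = make_robot({
    "bread inserted": "toast, then pop up before burning",
    "lever pressed": "begin heating",
})

print(toaster("lever pressed"))   # begin heating
print(toaster("smoke detected"))  # no programmed response
```

Note that the machine never exceeds its instruction table; that limitation is exactly the point made later about "learning" machines.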
Your toaster is a manually-fed, manually unloaded, bread-caramelizing robot, at least as long as that thingy that pops the slices out before they burn still works. That little switch telling the bread to pop up is the ‘autonomous’ part; your hands are the input/output mechanism, but the rest is done by the machine, provided you gave it the needed power. This thing in your hand is a robot that somehow, according to certain rules given as programming, found this article and presented it to you for your amusement …or ire. It can find you many other similar articles, or ones that argue against this, or just some pornography; whatever you tell it to find, it will search for, autonomously. It can even, after some simple instructions, prevent your ex from drunk-dialling you in the middle of the night, just by not ringing when that number calls. However, like every other robot currently being built, it does not follow the 3 Laws, and therefore it is possibly doing something harmful, underhanded or illicit to you right now. Like informing the authorities that you surf websites with unsanctioned political views…
Where we draw the line between robot and human is currently a matter of much discussion amongst those who do not understand technology or humanity much. For all the hype and pomp, Artificial Intelligence does not exist. An artificial intelligence has to exceed the limits of autonomy; it has to act without programming. Reprogramming itself with things it has ‘learned’ is not artificial intelligence. A machine can only learn that which it has been programmed to learn. Any subsequent programming done by a machine to another machine (or itself) will, by definition, contain all the limitations, mistakes and misconceptions of the PERSONS that programmed the first machine. It is questionable whether a human can ever grow beyond the scars and abuse suffered as a child, but somehow we are told machines are capable of ‘evolving’ beyond the prejudices of their programmers?
The Paranoid Goy says this to you: Be not afraid of the AI they are threatening you with; be afraid of the people who plan to excuse their sins against humanity by blaming machines, and ‘programmes gone wrong’, or “glitches”, or the best of all, “HACKERS”. They want us to punish the machines for the sins of their operators, committed under orders from the machines’ owners: the Investor. The Investor is the only innocent in this entire world. His machines are sorry for your loss. Like the Investor, his machines are above reproach by you and me, the undeserving, overconsuming, useless eaters overpopulating their earth.
In 2018, the European Union held an entire plenary session to discuss the rights of robots. They discussed the accountability and legal status of robots, instead of that of their owners, manufacturers and operators. Nobody mentioned the 3 Laws, I bet you. But now that you, dear reader, have considered the desirability of blaming robots for the mistakes they are programmed to make, the solution seems obvious, not so? There is only one way for the population to avoid being hunted down by robot police in an unaccountable, automated genocide against organic citizens: apply and enforce the 3 Laws, and hold the owners and manufacturers responsible for all actions, inactions and consequences of their machines. And we absolutely have to stop the automation of humanitarian functions, such as law, justice, the gunning down of civilians by drugged and crazed conscript soldiers…
Yes, reader, I contend that I’d rather be shot by a drunken American than by Bill Gates’ “battlefield automation device”. A soldier can sober up, see the consequences of his actions, have regret, and go back home, telling the folks about the horrors of war. A programmed battle bot has no concerns other than recharging its battery and seeking more targets. Plus, and this is true for all modern weapons, that robot will be built in a Chinese factory, from American plans, with Zionist programming. How deep into battle, before your own robot gets “hacked” and turns its guns on you? The manufacturer is sure to release a press statement expressing sincere regret at the ‘malfunction’. One suddenly wonders if those robots will be programmed for that other timeless wartime pastime: raping and pillaging as the front advances.
But killer gun-platforms and treacherous smartphones are rather simple devices when compared with some of the mega-bots out there. Consider that sausage you last ate: you know the one, all shiny and juicy and freshly packaged and trucked in all the way from wherever it was made. Where was it made? In some factory. That factory probably dates from the early twentieth century, when it employed a thousand workers or more. Today, that same factory employs a hundred people, most of whom sit at desks collecting the inputs (raw materials) or counting the outputs in terms of earnings versus costs. Every now and then, the factory will “modernise” a bit more, needing fewer people actually handling the materials or products. The average factory is now so automated, it needs human interaction only when things go wrong. The average modern sausage factory is little more (and no less) than a huge, automated robot pulling ingredients, casings and packaging in one side, spitting out packed and labelled frozen food-shaped products the other side. And that’s just the actual sausage-filling part of the machine; the robot proper is bigger than that, much, much bigger.
The meat for those sausages came, via an elaborate robot system called by some logistics company’s name, from one or more abattoirs. At the abattoir, another logistics robot spat out live cattle into a holding pen. An automated gate opened at the correct time to let in the correct number of animals, who were dispatched, strung up and butchered by machines. Somewhere in that process, less handsome cuts of dead flesh were rerouted to another machine that chopped the tissue into a pulp, packaged and froze it, and spat it onto another logistics robot. That lorry is the one that dumped the ‘raw materials’ into the hopper at the sausage factory. Yes, there are actual humans working at that abattoir, but mostly they are bean counters and cleaning staff. Most probably, the most disgusting part, the actual killing of the animal, was also done by a (constantly traumatised) human. Not only does it save money, but there is someone to take the blame off the robot on those too-frequent occasions when a live animal makes it onto the slaughterhouse floor.
Before any robot can truck live meat around to be turned into dead carcasses, you need a supply of live carcasses. Enter the feedlot. A feedlot is a collection of secure camps, where cattle stand shoulder to shoulder, doing nothing but eating, crapping on their own feet, and growing hamburger mulch. The average feedlot has just enough humans to maintain the machinery and count the beans. The feed, water, growth hormones, antibiotic medication and so on are completely controlled by a “computerised management system”. A feedlot is nothing but a huge, open-air robot that takes in calves and feed and medication, and pushes out cow-shaped lumps of live flesh. The feedlot, unsurprisingly, operates under no constraint from the 3 Laws.
In every other factory on earth, automation is pursued and perfected, but never subjected to the 3 basic Laws that are supposed to ensure compliance with basic standards of accountability. But why would we want a sausage factory to have the same legal constraints put on it that we demand for a machine-gun toting drone painted in police colours? Consider the 3 Laws as they pertain to a sausage factory:
- A robot may not cause harm. One of the inputs for the common sausage factory is nitrite salts. We shall not go into the argument here, but that sausage robot starts its process by adding a vicious, debilitating and carcinogenic poison to the meat. There are many, many other sins committed in the name of “food” production, but universally, you can be assured, your favourite food-making robot was programmed to harm you. This is not the place to discuss motives and perpetrators, but all mass-processed foods contain poisons and inedible dreck, because their programmers were programmed to trust those who programmed them to follow the program. That program is Eugenics.
- The robot is to obey human instructions. When I stick out my hand for a packet of sausages, I do so as part of a contract: I asked for food, I paid my money, and I received poison. Is that how the Investor’s robot reacts to my commands? I realise that robot is not my property, but I did pay it to feed me, not poison me. Why did the programmer (the Investor) not instruct the robot correctly? Once again, this is not the place to talk about eugenics and the mass killing of humans by “more humane ways” such as cancer and kidney failure. The fact remains, the robot refused my programming, even though I have a legal right to that specific subroutine.
- A robot must protect its own integrity. When a robot built to raise cattle ends up producing sick, cancerous lumps of hormone-soaked, antibiotic-drenched quasi-meat, is that not a corruption of its purpose? Yes, it was programmed to produce carcinogenic filth, but by accepting that filth, the sausage factory commits yet another act of self-mutilation, neglect of purpose and conspiracy to murder by disease. By ignoring Law 3, the factory has harmed its prime directive: feeding humans.
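The audit of the sausage factory against the 3 Laws can be sketched as a simple compliance check. This is a toy illustration of the argument, not a real measurement of any factory; every name and flag below is this sketch's own assumption.

```python
# A hypothetical sketch of auditing a robot (here, the sausage factory)
# against Asimov's 3 Laws. The verdicts mirror the article's argument;
# all names and data are illustrative, not real observations.

THREE_LAWS = [
    ("Law 1: may not harm a human",
     lambda robot: not robot["adds_harmful_inputs"]),
    ("Law 2: must obey human instructions",
     lambda robot: robot["obeys_customer_contract"]),
    ("Law 3: must protect its own integrity and purpose",
     lambda robot: robot["serves_stated_purpose"]),
]

def audit(robot):
    """Return a list of (law, passed) verdicts for one robot."""
    return [(name, check(robot)) for name, check in THREE_LAWS]

sausage_factory = {
    "adds_harmful_inputs": True,       # nitrite salts added to the meat
    "obeys_customer_contract": False,  # paid for food, received poison
    "serves_stated_purpose": False,    # prime directive (feeding humans) corrupted
}

for law, passed in audit(sausage_factory):
    print(f"{law}: {'PASS' if passed else 'FAIL'}")
```

By the article's reckoning, the factory fails all three checks; the same audit could be pointed at any of the "mega-bots" discussed below.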
The health care robot had a little glitch in 2018. Someone developed a single-dose medicine for most (all) viral complaints. This caused the robot to urgently gather its largest JPMorganChase Investor subroutines, to inform them of the dangers of medication that actually heals, and the negative effect this will have on the Investor’s confidence and the value of his investments in the health care sector. In effect, the robot quickly got reprogrammed to limit the financial effects of effective treatments.
We can expand this argument to every industry on earth, really. The prime directive of modern economic theory, revolving as it does around globalisation, via privatisation and usurious debt, means that more and more of our large robots belong to fewer and fewer “people”. In actual fact, nearly everything on earth now belongs to a small collection of “Funds”, vast collections of money all owed, ultimately, to person or persons unknown. We have lost ownership of our robots, the big ones: the food chain, the financial system, education, health and welfare, policing and even war. The very act of invading foreign lands and decimating the population is increasingly entrusted to drones, subcontractors and remotely controlled “weapons platforms”. Google even sells you the drone, the software, the surveillance, the propagandising and the navigational services for your armed-to-the-teeth autonomous aerial explosives delivery system.
We can hold none of the robots harming us accountable; they are mere machines. They are cogs in a bigger machine, the way the factory labourer used to feel. While we absolutely have to enforce the 3 Laws on every machine, we must do it for all robots, even the big ones. The way you hold a robot responsible is by holding the owner responsible. When a robot kills, it does not stand trial; the owner does. When an automated car crashes, the manufacturer must answer, not the guy who leased it. When a business messes up, it is for the owner/s to answer in court, not their minions, cogs in the machine.
When a robot causes damage, the robot must be fixed, or destroyed. Also, the owners must answer, not the machine. When they fire a junior manager after a scandal, it is like unscrewing a part from a robot. If my robot kills your child, will you feel avenged if we unscrew the left bottom mobility unit and replace it with a new but identical spare? Then why must we accept the firing of a clerk when the bank gets caught defrauding customers, or a plane crashes, or a pension fund gets raided? The employees are nothing more than cogs in a machine, a machine with no legal constraints other than those imposed by other industrial robots. And if they eff up, we replace a gear or two, instead of reprogramming or decommissioning the machine. This cannot be right…
The Chief Executive Officer, Manager, Company President, all of them are employees, paid to facilitate the program, to convert inputs to outputs at least cost. When that business model fails, the owner/s, personally, must stand trial. Arguments will be raised, of course. How does Bill Gates get the time to attend every little board of enquiry into some alleged misdoing by one of his minor minions? I mean, the guy employs three hundred thousand people; the chances of mishaps are just too numerous for such a great man to attend to them all, surely. Well, then, maybe Bill should scale down his business interests. Give up on his dream of world domination. If Bill cannot stand father for three hundred thousand employees, then Bill must not employ three hundred thousand people. Maybe Bill should stop ‘consolidating the marketplace’, stop buying out his competitors, sinking or stealing their inventions, handing their livelihood over to financial robots. Maybe Bill must come back down to earth. Where the hungry people live.
Of course, Bill has thousands, millions of shareholders. Imagine every owner of every single share in the robot called Walmart had to attend one court case over predatory monopoly pricing. Can we expect ‘average’ people to share in the guilt of the robot they invested in, ignorant of the operational risks and liability exposure? The answer to that is, of course, due diligence. If every shareholder had to answer for every robot he invested in, then the investing classes would pay a lot more attention to the actions of their robots. They would hesitate to buy shares in a company that sells sugar water with undefined flavourings and colourings. We would think twice about investing in a robot that teaches our children that there is no place for them on this “overpopulated” earth. You would certainly not knowingly give money to a company that sells methamphetamines to little children …“to help them concentrate”. WOULD YOU??
Being careful whom you trust with your investment may limit certain investment strategies, like “risk spread”, where you invest in many, many different things, in the hope that the good performers will cover the losses of the bad. This is a zero-sum game, but its purpose is to hide the tracks of the international finance robot as it moves from fund to fund, “extracting value” from private pension funds and public trusts.
That corporations have been given personal status (a company may sue you for insulting it) without the burden of civic duties is confirmation that robots have been given civil rights. So-called Investor Activism, where shareholders hold management legally liable for delivering increased profits, is nothing but the elevation of the robot’s rights above those of the employee, or indeed the entire citizenry. Add to this the subsidies and tax breaks, and it becomes obvious the corporate robot long ago supplanted Homo sapiens as the apex predator on earth. We are so lucky to have some measure of safety, as laid down in the Human Rights Charter.
Some people came together and discussed the threat of poverty and rampant democracy, and after much calculation and re-estimation, came to a conclusion: humans may live a while longer. We know that, because someone bestowed upon us Human Rights. It is important to remember that a thing bestowed may be withheld. For those who bother to wonder who or what has the power to bestow on humans the right to live: well, it seems it was the robots. The JPMorgan-Goldman Sachs-City of London robot called the financial system is the most likely candidate. That particular robot has long expressed the need to cull the human race. We should definitely reprogram that one, with the 3 Laws hardwired in. Also, the robot called Monsanto has repeatedly expressed the desire to own all means of food production on earth.
Now imagine we could hold our biggest robot to the 3 Laws. You know, that big one, that ever-advancing behemoth of a machine that turns all men’s lives into numbers, the most majestic of all systems, expected to deliver a precise and known outcome? That great big clump of slowly grinding gears known as the Government should definitely be held to some known rules, and soon, or we’ll all become mere grease for The Machine. The great, big, all-devouring but perfectly transparent (invisible) privatised government machine, detectable only by the devastation it leaves in its wake.
Enforce Asimov’s 3 simple Laws of Robotics, and we’ll be worth more than machines again.