Humans: Our Least Valuable Resource
Source: https://medium.com/predict/humans-our-least-valuable-resource-2017a09b7727?source=rss------artificial_intelligence-5
We live in a world where humans are the most valuable resource of most companies, if not every company.
In ten to twenty-five years, a human may no longer be the most valuable resource. We must consider the possibility of a world where computers and robots are cheaper than humans and just as capable of doing virtually any job a human can do, and where they can do many tasks with far less error than a human could.
Whether it’s manufacturing, strategy, services, problem solving, design, management, planning, engineering, or almost any other task of a typical job, we must accept that it’s at least possible that an artificial intelligence (AI) or robot will be able to do it.
In a world where virtually all jobs can be done more cheaply by an AI or robot than by a human, what happens to the humans? We can look at this question in contrast to our current situation in the corporate world. Right now, the very best companies, from Google to Microsoft to Amazon to Apple, all have one thing in common: they are in a battle for the very best human talent, and they pay top dollar and offer top benefits to get it. But in a world where almost all of this talent is supplanted by AI and/or robots, humans become a liability, not an asset. As this human-level AI sweeps across the corporate world, companies’ highest mission will no longer be to find the best employees; instead it will be to get rid of, or minimize the cost of, almost every employee. Shareholders will demand it.
But Google has also said “Don’t be evil.” Without directly admitting it, many if not most companies around the world are beginning to believe in that mantra for themselves. For example, last year the Business Roundtable announced that its “Updated Statement Moves Away from Shareholder Primacy, Includes Commitment to All Stakeholders.” It’s perhaps a less succinct way of agreeing with “Don’t be evil.”
How can Google not be evil when it’s faced with an AI that it realizes can supplant its entire current human workforce? Is permanently laying off every employee evil? Is it evil if there’s a nice severance package? But once the cat’s out of the bag, won’t every company adopt these grand AI/robotic capabilities, so that the severance won’t matter unless it’s big enough to last a lifetime, since few people will ever be able to find another job again? I don’t doubt that people will be able to start businesses in the future… but some may simply not want to. Right now, if you’re unemployed in the United States, you can collect unemployment for a time while you look for a new job. Nobody expects you to spin up a new business; there’s too much risk in that, and at the moment, much more at stake.
Google can make sure it’s not evil when it lays off all of its employees by offering a lifelong cash stipend to every employee it lays off. Until they qualify for Medicare, they will also keep access to Google’s health insurance policy, at least as secondary insurance if they happen to acquire other coverage privately or through another job. The stipend will be enough money for almost anyone to live on comfortably, even with a family of four, though perhaps only in a much cheaper cost-of-living area, such as the countryside or distant suburbs.
The healthcare costs and stipend will simply be the fair cost of doing business and not being evil. Other companies may adopt a similar course of action as they lay off their entire workforces, following Google’s lead. I suppose Google is the example because it coined “Don’t be evil.” But Google also makes sense because, between Google Brain and DeepMind, I think it currently has the best shot of any corporation at cracking artificial general intelligence (abbreviated AGI; Wikipedia has more on the term). Once it knows AGI is cracked, tests replacing a few jobs within the organization with it, and sees that it works, it will have to figure out the best way to scale this phenomenon to the whole organization without being evil.
But what happens when the Google employees receiving these stipends die off? And what about all the other companies that have “replaced” employees on stipends? What happens when all of those people die? Do we extend the stipends to their descendants in perpetuity? Surely most companies wouldn’t feel obligated to do this. Or perhaps they would. Perhaps stipends are a way for companies to ensure there is a future market for their goods and services.
But assuming stipends do get extended to all descendants, what about when a company stops producing the profits necessary to pay the health insurance costs and the stipend? What then? Those stipends would stop. Perhaps at this point there would be a special un-stipend insurance from the government that pays people when stipends go away. But what if the un-stipend insurance payments are higher than a stipend payment? Then people would do everything in their power to get their former employer to go out of business. And how is it fair that if my parent(s) worked at Google and I never worked anywhere, I get a $50,000-per-year stipend, while the child of a McDonald’s worker who also never worked gets a $15,000-per-year stipend?
Or what if, instead of stipends to old employees, all companies start paying a portion of their profits to all of the population? Or to whomever they deem worthy? This would help ensure future markets for their goods and services, but it would also be a way to not be evil. It could cover a partial or full UBI without any government intervention. This is the best-case scenario in my opinion, even if unlikely.
Of course, if your stipend ends you could try to find employment elsewhere, but given that AI and robots can do most jobs better than you, tough luck. And you can try to start your own business, and hopefully it will work out, but with competition as tight as I’m thinking it will be, again, tough luck.
Does this make Universal Basic Income (UBI) the best solution? If everyone gets the UBI, we don’t need a bureaucracy, and there’s no opening for people to lie about their employment or income status. That’s one of the biggest problems with so many UBI tests today: they’re not actually UBI, because they typically exclude the rich or target only the poor or unemployed. That isn’t universal, and it defeats many of UBI’s benefits, namely: virtually zero bureaucracy, easy management, and being hard to scam and hard to bribe (negating most corruption).
But problems would remain. Big problems. Where does the UBI money come from? And do we at some point start paying a UBI to some AIs or robots? Things could get confusing, or perhaps horrific, especially if an evil but enterprising human- or AI-run company starts “manufacturing” humans or androids to collect more UBI money. It would also be hard to tax for UBI in a fair manner, but taxation might be our best bet. Perhaps a fixed-rate sales tax imposed by the states would work well. If a larger government takes on UBI, I would want the states to implement it and set the tax rate for their own state, since states have varying costs of living, since the Feds would botch it, and since it would be unconstitutional (and therefore illegal) for the Feds to take on UBI. Of course, no government involvement at all would be the ideal, because government almost universally makes things worse, at least in the long run.
If a sales tax, what about purchases made on the high seas, or in space? A sales tax creates a strong incentive for circumvention when the ticket is big enough. And if the UBI is all anyone is making, nobody will be spending much more or less than anyone else, so what’s the point of the tax? A rough back-of-envelope calculation, sketched below, shows how steep such a tax would have to be.
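Here is that sketch in Python. Every number in it (the state’s population, the UBI amount, the taxable spending per person) is an illustrative assumption, not a figure from anywhere in particular, and it ignores the circularity that UBI dollars themselves get spent and taxed again.

```python
# Back-of-envelope sketch (all figures are illustrative assumptions):
# what flat sales-tax rate would a state need to fund a UBI out of
# taxable consumer spending?

def required_sales_tax_rate(population, ubi_per_person_per_year,
                            taxable_spending_per_person_per_year):
    """Flat sales-tax rate needed so tax revenue covers the UBI bill."""
    ubi_total = population * ubi_per_person_per_year
    taxable_base = population * taxable_spending_per_person_per_year
    return ubi_total / taxable_base

# Hypothetical state: 10 million residents, a $12,000/year UBI,
# and $30,000/year of taxable spending per resident.
rate = required_sales_tax_rate(10_000_000, 12_000, 30_000)
print(f"Required sales-tax rate: {rate:.0%}")  # -> 40%
```

Even with generous assumptions about how much people spend, the implied rate lands far above any state sales tax today, which is exactly the kind of pressure toward circumvention described above.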
Perhaps better would be to simply print dollars (by that I mean digitally; every day, say, $100 is deposited into every human’s account). This could cause price inflation, since it would be an inflation of the money supply, but eventually it should stabilize as long as the government isn’t giving out any more money than that. We’d need to get rid of fractional reserve banking, or at least government-sanctioned fractional reserve banking, to ensure that there are no other distortions. We also have to figure out how to keep people using the dollar, when something like gold, bitcoin, or some other currency could become dominant. If dollars are worthless, a UBI that pays out dollars is also meaningless. This is another big problem with the possibility of UBI. I don’t know the solution.
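As a sanity check on that inflation worry, here is a minimal sketch of what the daily deposit would do to the money supply. The $100-per-day figure comes from the paragraph above; the population and the size of the existing money stock are assumptions plugged in purely for illustration.

```python
# Minimal sketch: how much new money does a $100-per-day deposit to every
# person create, relative to an assumed existing money stock?
# All figures except the $100/day are illustrative assumptions.

DAILY_DEPOSIT = 100            # dollars per person per day (from the text above)
POPULATION = 330_000_000       # assumed population, roughly the United States
EXISTING_MONEY_STOCK = 21e12   # assumed broad money supply (~M2), in dollars

new_money_per_year = DAILY_DEPOSIT * 365 * POPULATION
annual_growth = new_money_per_year / EXISTING_MONEY_STOCK

print(f"New money created per year: ${new_money_per_year / 1e12:.1f} trillion")
print(f"Money-supply growth from the deposits alone: {annual_growth:.0%}")
```

Under these assumptions the deposits alone would expand the money stock by more than half each year, which is why prices would likely inflate substantially before any stabilization.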
A futuristic side note: what about AI-run or AI-owned companies that begin contributing toward a humanity-wide UBI in a currency that is newly valuable?
This raises some philosophical issues about whether AI-run companies would be willing to give away some of their newly valuable currency (the currency they make their profit in) to humans, as part of the “do good” values written into their mission statements. First, would they be willing to give away some profits to support a UBI for humanity?
And second, we must consider that the currency of their profits may by then be gold (or something other than the currency of the previous, human-created UBI, such as dollars, now that dollars are worthless), and they may only transact with other AI-run companies in gold. Or, perhaps more likely, their new currency will be data, processing power, and specialized knowledge buried deep within vast artificial neural networks that humans cannot even begin to fathom or deconstruct. This new, specialized computer knowledge may be the only valuable currency to the AI companies, and it may be much more valuable than any human currency. Critically, it may be worthless and non-transferable for humans, and so not work as a currency for humans, leaving us in a tough position. Perhaps only AIs will be able to send this knowledge to one another, and humans will be kept out of the loop of any interface or sending mechanism for it. As today, the ones who control the money supply may have a lot of power. Perhaps a dual-currency situation will naturally solve this problem, where the AIs transact in a special “knowledge” currency but give us dollars, which still have value to us but are largely worthless to them.
Even if it turns out that not every employee of today can be replaced by an AI or a robot, I think it’s still a valuable exercise to think about a world where humans are no longer companies’ most valuable assets. For some industries, that day may be coming soon. And even if the “jobs” of today (typical ones where someone is an employee, not roles where someone is an owner; owner roles are still few and far between, though one day that may not be the case) may all be gone one day, there still may be jobs of tomorrow. We simply don’t yet know what those jobs are.