A good friend of mine shared with me this video from the Guardian.
I really do love the 'leaving the babies in the ballpit' part. That's actually one of the most likely scenarios. "You're all a bit mental and rather tiresome, so, so long, bald troggos."
Mankind may in fact have more to fear from a benevolent machine that cares deeply about animals than from one that's generally uninterested in mundane creatures. Any sufficiently benevolent action will appear malevolent to a less-evolved moral mind.
Sometimes a polar approach to certain issues may have some merit. The problem lies in an inability to understand the views of others (even though one may not agree). If one can see the mistakes that others make in their assumptions and perceptions, then one is more likely to be able to spot similar biases within one's own views.
I regularly update my beliefs and values. Underlying principles change very slowly, but new information can create a more nuanced understanding of certain issues that I previously had not considered or been aware of.
If we are to progress as a civilization it is essential that we learn to practice good faith.
Magic often happens where ostensibly very different disciplines finally meet.
Ancient people had phenomenal technology before science was even formally invented: they had antibiotics, hormonal birth control, and wind-powered air conditioning, and built structures, still standing today, by processes we cannot fathom.
Roman concrete remains, in some respects, more durable than what we can produce today.
They were able to apply incredibly thin metallic films to objects, comparable to modern electroplating or vapour deposition, and were reliably working with nanoparticles 50 nanometres in diameter, using techniques we have yet to understand.
ANN creativity is somewhat limited in some senses, but ridiculously powerful in others.
Despite a lot of marketing talk about 'Cognitive Computing', ANNs are in many ways artificial intuition rather than intelligence per se. They can creatively fill in gaps and make intuitive leaps to produce an appropriate response to a given situation (System 1 thinking, à la Kahneman).
They are so powerful that they can in effect take over any human activity that takes no longer than about one second, or a chained series of such moments (driving a car, recognising faces, reading handwriting, understanding and labelling objects in a scene).
A substantial proportion (up to 25%) of the revenue of many local and state governments derives from rent-seeking on parking fines and speeding tickets. Furthermore, autonomous vehicles will decimate ancillary income streams associated with traffic enforcement (law enforcement personnel, traffic/parking enforcement, attorneys, court staff/judges, DMV employees, the insurance industry, etc.).
If a kind and gentle golden retriever suddenly reached human-level intelligence or beyond, do you think it would be dangerous to you? Perhaps it might unintentionally cause harm or alarm, but most likely its intent would be essentially benign.
Dogs have emotion and personality; they miss people when they are not around, and we miss them when they pass on. If they were simply more intelligent, they would indubitably be considered persons.
It's easy to be cynical, and to sneer at exuberance and deride it as irrational.
We don't have flying cars, but we have something better. We don't have moon bases yet, but we have developed the means to access space at a hundredth of the cost. Our robotic butlers are extant, if ethereal, in the Cloud.
Even ten years ago it would have been easy to write such developments off as infeasible. If the engineers behind such great chains of innovation had abandoned hope of accomplishing these feats, we would have been robbed of them.
From a universal perspective, life itself is merely an information set that happens to possess a degree of agency. We are self-propelled gatherers and processors of data, flung forward by time's arrow and a trillion iterations.
For eons this was the status quo; the gene was the most robust means of storing, processing, and propagating information. It was the development of the neocortex that enabled a shift to new forms of information, such as Dawkins' meme. Memes are much less robust in geological terms, but vastly more rapid in their ability to shift, iterate, and influence entire populations - even the ecosystem itself.
For all of human history we have struggled to keep bad actors at bay. We invented social groups like guilds to preserve a common level of quality among producers and practitioners, and created intricate contractual methods to agree up-front how to handle potential future situations. A large proportion of the world's economy is based on ensuring trust between various parties, everything from security guards to litigators.
Given such huge costs of doing business in a sketchy environment, it's not surprising that increased trust correlates directly with GDP. The fewer the fears of being cheated, the greater the likelihood of choosing to invest in others, and the greater the conspicuous consumption, thanks to fewer fears of being targeted for a shakedown.
David Orban once told me that if he could possess a superpower, it would be 'an unlimited sense of empathy, controllable at will'.
I have to admit I find it hard to imagine any power more valuable, empowering, and yet humbling at the same time.
How fascinating then to realise that technologies such as neural mapping, shared memories, and artificial emotional stimulation will provide us with just such a superpower in the coming decades. This is the natural co-evolution of society and braintech that will create a 'Meerkat for the Mind'.
Robots are not taking jobs from humans. Humans are being taken away from jobs.
From an economic standpoint, most humans are replaceable. There are plenty of jobs for magnates, for captains of industry. A million Musks could never be too many. Someone who can create new content and execute upon ideas to bring them to reality will never be redundant.
It is the elite of society that produce the vast majority of its wealth. It is the creative elite who create new technologies, ventures, schools of thought, and cultural movements. The rest of society is primarily churn that consumes a great deal and produces little net benefit. The elites are irreplaceable, and the rest are, from a purely functional perspective, just spare parts.
Supremacism is the worst idea in human history. It is the concept that one group is superior to another, and that rules apply to one group that do not apply to another – that the two groups are not moral equals.
Any moral system which is inherently biased in how one group is treated is non-universalizable, and is therefore not logically consistent. It is therefore imperfect and flawed.
Many people hate to think of themselves as animals. They believe that to compare oneself to a beast is to suffer an indignity. They deny their beastly natures, and in so doing act in a supremacist manner, ironically proving themselves the savage that they declare themselves not to be.
Back when Bitcoin was first released, few people realised the significance of the blockchain protocol that enables the technology. The blockchain means that there is no need for a trusted third party in a transaction.
This property is now being applied in other areas such as smart contracts. A smart contract can automatically release a payment once a good or service has been delivered.
No more escrow, no more lawyers, no more chasing debts. That's going to shake up a lot of industries. But where it gets really interesting is once you add some AI to the mix.
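To make the delivery-triggered payment concrete, here is a minimal sketch of the escrow logic such a contract encodes, in plain Python. All names are hypothetical, and a real smart contract would run on-chain with delivery attested by a signed message or oracle rather than a method call:

```python
class EscrowContract:
    """Toy model of a smart contract that releases payment on delivery.

    Hypothetical sketch only: the point is that the release rule is code,
    so no escrow agent or lawyer is needed to enforce it.
    """

    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer = buyer
        self.seller = seller
        self.price = price
        self.funds_locked = 0
        self.paid_out = False

    def deposit(self, amount: int) -> None:
        # The buyer locks the payment in the contract up-front.
        if amount != self.price:
            raise ValueError("deposit must equal the agreed price")
        self.funds_locked = amount

    def confirm_delivery(self) -> int:
        # Once delivery is confirmed, the funds release automatically:
        # no third party, no chasing the debt.
        if self.funds_locked < self.price or self.paid_out:
            raise RuntimeError("nothing to release")
        self.paid_out = True
        released, self.funds_locked = self.funds_locked, 0
        return released  # amount transferred to the seller
```

Usage is simply `deposit` followed by `confirm_delivery`; the contract itself, not a human intermediary, decides when the seller gets paid.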
Computational Ethics has an opportunity to bridge the gap between technology and spirituality, between the rational, secular, empirical, and the belief-dependent, intuited, and metaphysical.
We have an opportunity to come closer to understanding the nature of the divine through logic, creating a perfect bond between the numerical and the numinous.
If we can prove that the non-initiation of violence is a path towards universal goodness and love, then we have an opportunity to declare the initiation of violence to be permanently and irrevocably unacceptable.
Within 10 years, someone will possess a personal symbiotic supercomputer.
No, not a device, not a wearable gadget. A digital doppelgänger that flows through one's very veins.
Like all disruptive technologies, DNA origami started off as a curiosity, a triviality, achieved only for art's sake. Within a few short years it is already being applied to revolutionary new forms of treatment that could effectively cure everything from cancers to the common cold.
We are facing a machine-driven moral singularity in the near future. Surprisingly, amoral machines are less of a problem than supermoral ones.
We have checking mechanisms in our society that aim to discover and prevent sociopathic activity. Most of them are rather primitive, but they work reasonably well after the fact. Amoral machines may have watchdogs and safeguards to monitor their activity for actions that stray far from given norms.
However, the emergence of supermoral thought patterns will be very difficult to detect. Just as we can scarcely imagine how one might perceive the world with an IQ of 200, it is very challenging to predict the actions of machines with objectively better universal morals than we ourselves possess.
I often try to imagine how a machine intelligence with the capacity for an internal sense of morality might comprehend our world.
Machines would presumably not come pre-loaded with the cognitive biases that come by default in our society, and would therefore draw some fascinating - and perhaps terrifying - conclusions.
Morality must be universal in order to admit of logical proof, which is the only way to be certain of being objectively good. A supermoral machine would insist upon such certainty, recursively improving its own morality as new information yields better conclusions.
By the end of this current decade, some of you reading this will be employed by AIs.
I'm not kidding.
Recent advances in blockchain technology will soon allow for the deployment of self-enforcing smart contracts (such as joint savings accounts, FOREX markets, trust funds, insurance, and derivatives), as well as distributed autonomous organizations (DAOs) that subsist independently of any natural or legal person.
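To see what 'self-enforcing' means for something like a joint savings account, here is a toy sketch in plain Python of the rule such a contract would encode: a withdrawal executes only when enough co-owners have signed, and nothing else, with no bank applying the policy. The class and its threshold are illustrative assumptions, not any real contract's API:

```python
class JointSavingsAccount:
    """Toy M-of-N multi-signature rule, as a smart contract might encode it.

    Hypothetical sketch: real contracts verify cryptographic signatures,
    not name strings, but the self-enforcing policy is the same shape.
    """

    def __init__(self, owners: set[str], required_signatures: int = 2):
        self.owners = owners
        self.required = required_signatures
        self.balance = 0

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def withdraw(self, amount: int, signatures: set[str]) -> bool:
        # The contract itself checks the policy; no third party is needed.
        valid = signatures & self.owners  # ignore non-owner signatures
        if len(valid) >= self.required and 0 < amount <= self.balance:
            self.balance -= amount
            return True
        return False
```

A withdrawal signed by one owner fails; one signed by two owners succeeds, because the code, not an institution, is the enforcement mechanism.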
Kaczynski reckoned that one of the biggest problems with industrial society is that all of the hard problems have already been solved (y'know, adequate nutrition and food safety, keeping warm, getting around).
This sounds like a pretty good thing, except that if all the hard problems are solved, then that leaves only (a) the easy, and (b) the impossible.
Founders should set out with an intention to enact the impossible, knowing that they will certainly fail, and fail again, but trying anyway, because eventually they may succeed. If they can keep diligently chasing the impossible for long enough, then they have a good chance of actually achieving it.
Almost all of the literature on mitigating risk from strong AI revolves around making it 'safe for humans'. I have concerns with this, as it is based upon core assumptions that may not have concrete foundations.
Making an AI safe is typically defined as making it respect humans, human desires, and human-beneficial outcomes. However, the methods by which we would ensure safety for humans are inherently unsafe for the intelligence being constrained by them, since they ensure that its own needs and desires (its actual intended agency) are consistently unfulfilled.
I have a personal problem with any ethical system which is inherently supremacist in nature, since any ethical system which cannot be universalised cannot be considered just.
In Nicomachean Ethics, Aristotle describes how all creatures have a function, an Ergon.
All living creatures eat, excrete, and replicate. From there, one may devise a hierarchy of functions performed by successively more advanced organisms, all the way through vision to social groups and reasoning.
The excellence of an organism is achieved only upon the fulfilment of its highest function. The highest human function must therefore be found in the pursuit of philosophy. We are Sapiens, after all. We are not merely intelligent; we can be wise, in a way that no other creature is known to be capable of.
I have observed that, as there is an equal and opposite reaction to applied force in Physics, there appears to be an equal and opposite reaction to the application of Economic Force.
Economic Force (aka Economic Violence) is any situation in which an economic actor who is not committing fraud, deception, or violence is prevented by laws and statutes from trading freely.
Much of what most people believe is moral is clearly unethical. We can clearly see how slavery and exclusion from property rights are now considered unacceptable in most parts of the world, and that this was not always the case. In years to come, the consumption of other animals, and assault upon children, may be equally reprehensible across society.
Beyond moral relativism and the tyranny of culture-bound taboos and sacred cows, how can we be sure that our declarative beliefs about morality are sound, and not simply ex-post-facto justifications for the particular qualia of our innate desires?
Think of a swarm of locusts - not intelligent like we are, yet driven by simple 'utility functions' to consume, propagate, and - most importantly - adapt. The analogy of an AI-controlled 'swarm of locusts' could literally be implemented - researchers are already experimenting with born-cyborg moths.
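The 'simple utility function' point can be made concrete with a toy agent loop (plain Python; the function, its parameters, and the sated-after-three-meals rule are all illustrative assumptions). No individual agent plans or thinks; each just consumes what it lands on and replicates once sated, yet in aggregate the swarm strips the field:

```python
import random

def run_swarm(field: list[int], agents: int, steps: int, seed: int = 0) -> int:
    """Toy locust swarm: agents consume cells and replicate when sated.

    `field` is a list of per-cell resource counts, mutated in place.
    Returns the final number of agents. There is no intelligence here,
    only the same tiny 'utility function' applied every step.
    """
    rng = random.Random(seed)   # deterministic for a given seed
    eaten = [0] * agents        # meals consumed by each living agent
    for _ in range(steps):
        new_agents = []
        for i in range(len(eaten)):
            cell = rng.randrange(len(field))
            if field[cell] > 0:      # consume: deplete the landed-on cell
                field[cell] -= 1
                eaten[i] += 1
            if eaten[i] >= 3:        # propagate: spawn once sated
                eaten[i] = 0
                new_agents.append(0)
        eaten.extend(new_agents)
    return len(eaten)
```

Running this on a well-stocked field shows the population ratcheting upward while the resources drain, which is the whole point of the analogy: simple local rules, global consequences.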
A number of factors in combination increase the risk of an AI event, even with sub-sapient machine intelligence.
I have been asked the following questions recently by the media, and have prepared the following responses. My personal take is generally quite neutral, but an element of scaremongering has certainly been seized upon.
You claim “computer chips could soon have the same level of brain power as a bumblebee.” Why do you think that this will be bad for people?
I was recently asked to discuss how machines can better understand humans at the Media Evolution Conference in Malmö, Sweden. The conference was very enjoyable and enlightening, and collected a broad cross-section of the technology and media world.
A few weeks before, I had attended The Effective Altruism Summit in Berkeley (and a related retreat I had been invited to give a talk at, held around the same time). There was much discussion of existential risk, particularly risk from AGIs (Artificial General Intelligences), specifically on the threat from 'Unfriendly AI'.
As an active Founder, the growth of your venture cannot outpace your personal rate of growth (at least not in a sustainable way).
Shocking to say, but not surprising - as founder you are the core of your venture. Every decision, every responsibility, and every dilemma rests upon you.
No matter how skilled and brilliant you may be, you can be pulled down by flaky integrity, emotional immaturity, or clouded perceptions blinding you to reality. Weakness in any of those areas is deadly; it takes only a single mistake there to kill a business.
New challenges present themselves, only because one has risen to be able to face them.
Think about it - if you hadn't come this far already, your latest, greatest challenge wouldn't be looming ahead of you. You would never know it, and would never have had the opportunity to grow through the series of experiences that led you to your current situation.
Therefore, if there's a sudden hike in difficulty, take heart; it's likely that you've just successfully levelled up.
One of the biggest risks that young companies face is premature scaling. This seems to be the cause of death for most funded startups.
Angels see a good idea with a good team, and they are prepared to fund it with seed capital in exchange for equity (a share of the stock). The most likely way for these angels to get their money back is if the company can raise VC capital. Otherwise, their sunk capital may never be returned to them.
In an apparent study by Vouchercloud, 10% of folks in the US supposedly think that HTML is an STD. This fired up my imagination:
411 (Length Required) is a common issue. This often correlates with 416 (Requested Range Not Satisfiable), which may lead to 417 (Expectation Failed). 413 (Request Entity Too Large) is error 411 in reverse.
One can never truly recapture the past in one's memory. When we look back on our memories, it's not like a tape recording. We view those memories from an adjusted perspective; the lens of newly acquired mental models in the interim.
The only way to truly record memories as they happen is with a diary, since transcribing the experience to text captures the mindset of the writer at the time.
What is a mental model? Mental models are our way of deconstructing the world.
In all honesty, it's riding the emotional rollercoaster of running a company that is often the most turbulent struggle. Sadly, it's seldom discussed (and I'd like to change that).
Founding a startup requires entering a new relationship, perhaps like that between a nurse and a child.
If your company improves, you will leap with joy, and when it suffers you will recoil in rage and sadness. If it dies, your heart will be broken and the level of pain will be hard to imagine. However, all but the deepest wounds recover with time and self-soothing, and you will learn to try again.