“Self-driving cars are here”
Collective human institutions and wisely educated humans have historically proven able to find ways to sufficiently keep in check their "black box" sides, their "drunken, stung, monkey mind": institutions have constitutions, when well made, while individual humans have a conscience, when well honed through reflection and meditation.
The same black box problem arises in the main "runtime environment" of ever more powerful AIs. We should try to replicate proven "architectures of individual and collective human wisdom". Such architectures have, in the best-case scenarios, allowed us to tackle and even enjoy runaway unpredictability, while properly minimizing suffering.
What if the solution for Artificial Intelligence were Collective Intelligence?! Perhaps the key to success, in terms of human control, alignment with humanity's values, and redistribution of benefits, for the Artificial Intelligence of the future lies in the governance of the human organizations critically involved, both formally (ownership, research, regulation) and informally (hacking). A positive AI Singularity for the large majority of humans will, in fact, be nothing other than the by-product of effective forms of transnational Collective Intelligence, that is, of highly democratic and competent transnational governance, in critical domains, starting with mainstream media control, cybersecurity, and AI. Ultimately, the problem of the future of AI, and therefore of the human race, is not a technological problem but exclusively a governance problem: producing and regulating AI technologies and standards that realize AI's enormous opportunities instead of its worst nightmares.
ENISA Threat Landscape 2016 report: cyber-threats becoming top priority
If IT security boils down to that of the weakest "link", then what is the sense of arbitrarily limiting the scope of the annual EU ENISA Cyber Threat Reports by excluding some "links", such as the entire supply chain? Why is that?!
The US Defense Science Board stated as early as 2005: "Trust cannot be added to integrated circuits after fabrication".
In the Thematic Landscape of Hardware document, vulnerabilities introduced during fabrication and the supply chain are out of scope. They say: "According to the scoping performed in Section 1.1, supply chain security aspects which are relevant for invasive/integrated modification of hardware was not in scope of the good practice research."
The CEO of Google DeepMind is worried that tech giants won’t work together at the time of the intelligence explosion
An absolute must-read for those who think an AI superintelligence explosion may be just science fiction, or decades away. Even the largest groups with huge economic interests in downplaying the risks of AI are now clearly spelling out those risks and hinting in the right direction. Comforting.
ABC News came out with this article:
In assessing the veracity and completeness of what the head of NSA's TAO says here, we should consider that it is an important part of his agency's mission that hundreds of thousands of potential mid-to-high-value targets, legitimate or not, overestimate the manual (as opposed to semi-automated) resources and effort the NSA needs to devote per person to continuously compromise them. See the NSA FoxAcid and Turbine programs.
So these targets will think the NSA "endpoint target list" is small and does not include them, and will therefore feel fine with end-to-end encryption and merely moderate- or high-assurance endpoints, like Tor/Tails, Signal on iOS, or a high-end cryptophone.
said Christian Byrnes, managing vice president at Gartner. “An inflection point in business and technological innovation has occurred, which we refer to as the ‘digital explosion’ and the ‘race to the edge.'”
Today, there was an Islamic terrorist massacre in Paris.
Aside from madness, what could the Paris Massacre terrorists, and those that support or strategize behind them, possibly have aimed to achieve?!
It can only be an increase of fear and hate among innocent civilians of two different religious faiths and cultures, which would lead to more war in Islamic states, and then to the coming to power of more fanatical, irrational regimes that claim to represent the true Islamic faith.
But more war in Islamic states, with "collateral" massacres and injustices towards millions of Islamic civilians, is unfortunately a goal that, whether out of Christian religious fanaticism and hate, political misjudgment, or huge economic interests, has also been very actively promoted by some Western private actors (oil companies and defense contractors) and governmental ones.
We can’t fight one without the other.
For my master's thesis in Public Policy and Regional Planning at Rutgers University in 2000, I defined in fine detail an ethical vision I had in 1998, one that convinced me to pursue that master's at that school: the technical, political and conceptual business plan for a LINEAR CITY (1.0), i.e. a large-scale intermodal urban corridor RE-development, heavily centered on public transport and light electric vehicles, to make cities social, human and ecologically sound. I even made full 3D animations myself, in amazing detail:
WELCOME TO LINEAR CITY 2.0
Fifteen years later, given all the advances in self-driving vehicles, and the fact that it will still take many years before they are authorized on the streets, and decades before they make up the majority of cars, my Linear City concept could be amended by substituting all feeder systems to the main subway/train line, which in version 1.0 were a mix of mixed-grade buses and automated guided buses (i.e. with a driver!), with purely self-driving small buses, running on a mix of separate-grade and mixed-grade rights of way. In some cases, separate grade may just be a preferential lane well marked on the asphalt, with sidewalk pedestrian warnings, without physical separation.
In July 2015, the Italian parliament approved, through a motion, an Italian Internet "Bill of Rights". We greatly admire and support the motives of the drafters, many of whom are friends, but we believe it necessary to highlight some serious shortcomings in its approach, starting with its Preamble.
It has fostered the development of a more open and free society.
This is very arguable. A large majority of digital rights activists and IT security and privacy experts would disagree that, overall, it has.
The European Union is currently the world region with the greatest constitutional protection of personal data, which is explicitly enshrined in Article 8 of the EU Charter of Fundamental Rights.
This is correct, although Switzerland may be better in some regards. Nevertheless, even such standards have to date not at all been able to stop widespread illegal and/or unconstitutional bulk surveillance by EU states, until Snowden and Max Schrems came along. Furthermore, even if the US and EU states fully adhered to EU standards, it would significantly improve assurance against passive bulk surveillance, but it would do almost nothing against highly scalable targeted endpoint surveillance (NSA FoxAcid, Turbine, Hacking Team, etc.), aimed at tens or hundreds of thousands of high-value targets, such as activists, parliamentarians, reporters, etc.
Preserving these rights is crucial to ensuring the democratic functioning of institutions and avoiding the predominance of public and private powers that may lead to a society of surveillance, control and social selection.
"May" lead?! There is a ton of evidence from the last two years that, to a large extent, we have already been living for many years in a "society of surveillance, control and social selection."
Internet … it is a vital tool for promoting individual and collective participation in democratic processes as well as substantive equality
Since it has emerged as overwhelmingly a tool of undemocratic social control, it would be more correct to refer to its potential for "promoting individual and collective participation in democratic processes", rather than treating that as a current fact.
The principles underpinning this Declaration also take account of the function of the Internet as an economic space that enables innovation, fair competition and growth in a democratic context.
Framing this at the end of the preamble makes it appear that privacy and civil rights needs are obstacles to innovation, fair competition and growth, which is not the case, as the Global Privacy as Innovation Network has been clearly arguing for over two years.
A Declaration of Internet Rights is crucial to laying the constitutional foundation for supranational principles and rights.
First, there have been about 80 Internet Bills of Rights approved by various stakeholders, including national legislative bodies. Second, a "declaration of rights" can very well be just smoke and mirrors if those rights are not defined clearly enough and meaningful democratic enforcement is not also enacted. There is really no way to establish proper "supranational principles and rights", and related enforcement mechanisms, except by a number of nations bindingly agreeing to them, similarly to the process that led to the creation of the International Criminal Court.
Stephen Hawking, the great physicist, sees into the future of humanity like no one else. He sees our greatest risks as related to the future of self-improving AI machines:
(1) Human extinction, if AI machines cannot be controlled at all. He said: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all".
(2) Huge wealth [and power] gaps, if AI machine owners do not allow a fair distribution once these machines take on all human labor. He said: "If machines produce everything we need, the outcome will depend on how things are distributed." Hawking continued: "Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."
In this YouTube video excerpt (minutes 8:33-15:55) from Panel 2 of the Free and Safe in Cyberspace conference, which I organized two weeks ago, Richard Stallman and I debate IT trustworthiness and free software. The entire panel video is also available in WebM format here.
In this excerpt, Richard Stallman says that computing trustworthiness is a "practical advantage or convenience" and not a requirement for computing freedom. I opposed to that a vision in which the lack of meaningful trustworthiness inevitably turns the other four software freedoms into a disutility to their users, and to the people with whom they share code. I suggested that this realization should somehow be "codified" as a fifth freedom, or at least very widely acknowledged within the free software movement.
… without introducing any undemocratic bias:
Introduce contextual ads made exclusively of product/service comparisons produced by democratically controlled consumer organizations. In Italy, for example, there is the Altroconsumo organization, with hundreds of thousands of members, which regularly produces extensive comparative reports.
In practice: for each new report that comes out, a request is made to the companies producing the products/services ranked in the top 30% to sponsor its publication inside Wikimedia portals.
Such a formula could be extended to Wikimedia video, generating huge funds, arguably without introducing any undemocratic bias. Proceeds are shared between Wikimedia and the consumer organization.
(originally written in 2011, and sent to Jimmy Wales, who found it interesting)
There are many ways to try to prevent catastrophic AI developments by actively getting involved as a researcher, political activist or entrepreneur. In fact, I am trying to do my part as Executive Director of the Open Media Cluster.
But maybe the best thing we can do to help reduce the chances of the catastrophic risks of an artificial superintelligence explosion (and other existential risks) is to become a "Unabomber with flowers".
By that I mean, we could hide out in the woods, as the Unabomber did, living in modern off-grid eco-villages somewhere. But, instead of sending bombs to those most irresponsibly advancing general Artificial Intelligence, we would send them flowers, letters, fresh produce, and invitations for a free trip to the woods.
Here's what the Unabomber wrote in his manifesto "Industrial Society and Its Future", published by the New York Times in 1995:
173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
My wife Vera and my dear friend Beniamino Minnella surely think so.
After what came out of the Hacking Team scandal, we should ask whether these technologies, crucial to society, which many governments want extended as mandatory to other IP communications, have a problem at their origin, i.e. in their international governance by NIST and ETSI, the non-binding bodies that set their standards (which are then mostly taken up by national governments). We know NIST has broken crucial crypto standards under pressure from the NSA; here is the formal governance of ETSI, whose process industry players participate in deeply:
The just-revealed backdoor in Hacking Team's RCS systems (for them and presumably for their state friends) was the very reason for the existence of the first such systems from the early '80s and '90s (!!), created by former NSA staff, then taken over by former (?) senior Mossad agents, and sold to tens of governments worldwide.
They were pushed around "presumably" with the key goal of giving Israeli intelligence full information on what other intelligence agencies were up to. The US made an illegal copy for itself and pushed that one around to other governments …
Here is the Wikipedia page with a long, detailed story of it, and here are excerpts from a relatively authoritative book on the history of Mossad, "Gideon's Spies", which I finished reading last Christmas:
From an Ars Technica post today. It does make sense in many regards:
Rabe argued that just as the United States and other Western countries routinely sell arms to allied countries like Saudi Arabia, so too should Hacking Team be able to sell its wares as well. After all, he pointed out, more than a dozen of the September 11 hijackers were from that country.
“Do you want Saudi Arabia to be able to track that sort of thing or would you rather have them be able to operate behind contemporary secrecy and the Internet?” he said.
“My point is not really to argue the various dangers of different kinds of equipment but just to say that if you’re going to sell weaponry to a country, it’s a little disingenuous to say that a crime-fighting tool is off-limits.”
Rabe ended the call with a forceful defense of the company’s entire business model, saying that there should be a controlled, appropriate way for governments and law enforcement to breach digital security.
“[CEO David Vincenzetti] started life in what we would call defensive security, to keep people out, and then he realized as more and more of the communications became inaccessible, that there was a need for a tool that gave investigators the opportunity to do surveillance. I don’t think that’s really that hard to understand, frankly. I don’t think any of us are against cryptography, but what we’re against is police not being able to catch criminals and prevent crime, that’s what we’re worried about.”
The Panopticon is a type of institutional building designed by the English philosopher and social theorist Jeremy Bentham in the late 18th century. The concept of the design is to allow a single watchman to observe (-opticon) all (pan-) inmates of an institution without the inmates being able to tell whether or not they are being watched. Although it is physically impossible for the single watchman to observe all cells at once, the fact that the inmates cannot know when they are being watched means that all inmates must act as though they are watched at all times, effectively controlling their own behaviour constantly.