Is meaningful trustworthiness a requirement of Free Software “computing freedom”?

This YouTube video excerpt (minutes 8:33-15:55) is from Panel 2 of the Free and Safe in Cyberspace conference, which I organized two weeks ago, and in which Richard Stallman and I debate IT trustworthiness and free software. The entire panel video is also available in WebM format here.

In that excerpt, Richard Stallman says that computing trustworthiness is a “practical advantage or convenience” and not a requirement for computing freedom. I countered with the view that the lack of meaningful trustworthiness inevitably turns the other four software freedoms into a disutility for their users, and for the people with whom they share code. I suggest that this realization should somehow be “codified” as a fifth freedom, or at least be very widely acknowledged within the free software movement.

Posted in work1, work2

How could the US government incentivize IT service providers to voluntarily and adequately comply with lawful access?!

More news on Obama’s search for a legislative or regulatory solution to lawful access to digital systems.

For some time now, the US government has been stating ever more often that there will be no mandatory technical requirements to enable remote state lawful access, but that it expects providers to somehow come up autonomously with solutions that would allow lawful access when needed by investigating agencies.

But any company that decided to come up with technical and organizational processes to do so, even with extremely effective safeguards for both the citizen and the investigating agency, would appear to be, and possibly actually be, less secure than competing services or devices that do not provide such access.

This problem could be solved if the US government provided very solid and reliable incentives to providers that do comply, and do so in a proper way, i.e., that meet a minimum of citizen-accountable, extreme safeguards protecting both the user and the agency. The US government could approve solidly enforceable policies that prescribe much higher personal economic and penal consequences for officials of state agencies who are found searching for or implanting vulnerabilities, applying those policies ONLY in favor of high-assurance IT service providers that offer socio-technical systems for complying with government requests, as certified by an independent, international, technically proficient and accountable certification body. IT service or device providers that do not would be excluded from such new policies.

To kill two birds with one stone, such an international body could also certify IT services and devices that offer meaningfully high levels of trustworthiness, something that is direly missing today. One such certification body is being promoted by the Open Media Cluster (which I lead), under the name Trustless Computing Certification Initiative.

Posted in work1, work2

Marino’s Resignation: “rule of law” versus “o’ Sistema”

Today Marino resigned as mayor of the Comune di Roma.

In 1992, with Tangentopoli, the economy stalled for one to two years and almost nobody went to jail. New parties were formed, which almost entirely carried on as before, or worse.

Today, with Marino, something similar is happening, but without even making the news properly: the media are lined up in battery formation over the mayor’s alleged freeloaded meals, instead of talking about the structural, uninterrupted assault on the city’s coffers worth hundreds of millions of euros.

One must conclude that anyone who even merely tries not to compromise themselves and lend themselves to the “widespread, high-level illegality orchestrated by politics”, within the limits of an administrator’s powers, will be accused of “doing nothing”, pressured through various forms of obstructionism aimed at worsening public services, and targeted by press maneuvers designed to erode their political support.

We tried electing magistrates with broad political support, like De Magistris, but there was almost nothing to be done; as soon as he tried to do something, they blocked his transport and garbage services and isolated him through the media.

We tried with Marino, the independent professor back from America, but since he does not take part in what in Naples they call “o’ Sistema”, he got the same treatment. They say “he doesn’t connect with the Romans”, because all the local media say he is a do-nothing thief, when he is simply a serious person.

The only way out will be when a mayor is elected with the clear, central mandate – backed by a party with credibility in light of its own history – to clean up the structural, large-scale corruption, and NOT the little lunch receipts. Only then will the media fall in line and explain to Romans that if services don’t work and the coffers are empty, it is because of that corruption and not because of those who try to fight it. I don’t know whether such a party or political force already exists in Italy …

Posted in personal1, personal2

A Proposed Solution to the Wikimedia funding problem …

… without introducing any undemocratic bias:

Introduce contextual ads made exclusively of product/service comparisons produced by democratically-controlled consumer organizations. In Italy, for example, there is Altroconsumo, an organization with hundreds of thousands of members that regularly produces extensive comparative reports.

In practice: for each new report that comes out, the companies whose products or services rank in the top 30% are asked to sponsor its publication inside Wikimedia portals.
The same formula could be extended to Wikimedia video, generating huge funds, arguably without introducing any undemocratic bias. Proceeds are shared between Wikimedia and the consumer organization, roughly as sketched below.
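A minimal sketch of the mechanism, purely illustrative: the report data, the sponsorship fee, the 30% cut-off applied to product rankings and the 50/50 revenue split are assumptions made for this example, not features of any existing Wikimedia or Altroconsumo programme.

```python
# Illustrative sketch of the proposed sponsorship mechanism (all data hypothetical).

def top_30_percent(report):
    """Return the products ranked in the top 30% of a comparative report."""
    ranked = sorted(report["products"], key=lambda p: p["score"], reverse=True)
    cutoff = max(1, round(len(ranked) * 0.30))
    return ranked[:cutoff]

def solicit_sponsorships(report, fee_per_product):
    """Ask each top-30% producer to sponsor publication on Wikimedia portals."""
    return [{"company": p["company"], "product": p["name"], "fee_eur": fee_per_product}
            for p in top_30_percent(report)]

def split_proceeds(sponsorships, wikimedia_share=0.5):
    """Share proceeds between Wikimedia and the consumer organization (assumed 50/50)."""
    total = sum(s["fee_eur"] for s in sponsorships)
    return {"wikimedia": total * wikimedia_share,
            "consumer_org": total * (1 - wikimedia_share)}

# Hypothetical report with ten products scored by the consumer organization.
report = {"category": "washing machines",
          "products": [{"company": f"Maker{i}", "name": f"Model {i}", "score": 100 - i}
                       for i in range(10)]}

offers = solicit_sponsorships(report, fee_per_product=20_000)
print(offers)                  # 3 companies (top 30% of 10) are invited to sponsor
print(split_proceeds(offers))  # {'wikimedia': 30000.0, 'consumer_org': 30000.0}
```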

(originally written in 2011, and sent to Jimmy Wales, who found it interesting)

Posted in work1, work2

“If no values-based standards exist for Artificial Intelligence, then the biases of its manufacturers will define our universal code of human ethics. But this should not be their cross to bear alone. It’s time to stop vilifying the AI community and start defining in concert with their creations what the good life means surrounding our consciousness and code.”

http://mashable.com/2015/10/03/ethics-artificial-intelligence/?utm_cid=mash-com-Tw-tech-link

Posted in work2

“Unabomber with flowers”. Might it be our best option to stave off an AI superintelligence explosion?

There are many ways to try to prevent catastrophic AI developments by actively getting involved as a researcher, political activist or entrepreneur. In fact, I am trying to do my part as Executive Director of the Open Media Cluster.

But maybe the best thing we can do to help reduce the chances of the catastrophic risks of an artificial super-intelligence explosion (and other existential risks) is to become a “Unabomber with flowers”.

By that I mean we could hide out in the woods, as the Unabomber did, and live in modern off-grid eco-villages somewhere. But instead of sending bombs to those most irresponsibly advancing general Artificial Intelligence, we’d send them flowers, letters, fresh produce, and invitations to a free stay in the woods.

Here’s what the Unabomber wrote in his manifesto “Industrial Society and Its Future”, published by the New York Times in 1995:

173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

My wife Vera and my dear friend Beniamino Minnella surely think so.

Posted in personal1, work1

IT security research needs for artificial intelligence and machine super-intelligence

(originally appeared on Open Media Cluster website on July 7th 2015)

On January 23rd 2015, nearly the entire “who’s who” of artificial intelligence – the leading researchers, research centers, companies and IT entrepreneurs, in addition to some of the world’s leading scientists – signed the Open Letter “Research priorities for robust and beneficial artificial intelligence”, together with an attached detailed paper (we’ll refer to both below as the “Open Letter”).

In this post, we’ll look at the Open Letter and the ways in which its R&D priorities in the area of IT security may crucially need to be corrected and “enhanced” in future versions.

We’ll also look at the possibility that the short-term and long-term R&D needs of artificial intelligence (“AI”) and information technology (“IT”) – in terms of security for all critical scenarios – may become synergic elements of a common “short to long term” vision, producing huge societal benefits and shared business opportunities. The dire short-term societal need and market demand for radically more trustworthy IT systems – for citizens’ privacy and security and for the protection of society’s critical assets – can very much align, within a grand strategic EU cyberspace vision for AI and IT, with the medium-term market demand and societal need for large-scale ecosystems capable of producing AI systems that are high-performing, low-cost and still provide adequately extreme levels of security for critical AI scenarios.

But let’s start from the state of the debate on the future of AI, machine super-intelligence, and the role of IT security.

In recent years, rapid developments in AI-specific components and applications, theoretical research advances, high-profile acquisitions by major global IT giants, and heartfelt declarations on the dangers of future AI advances from leading global scientists and entrepreneurs have brought AI to the fore as both (A) a key to economic dominance in IT and other business sectors, and (B) the fastest-emerging existential risk for humanity, in its possible evolution into uncontrolled machine super-intelligence.

Google, in its largest EU acquisition this year, acquired a global AI leader, DeepMind, for 400M€; DeepMind had already received investment from Peter Thiel and Elon Musk, primary early investors in Facebook. Private investment in AI has been increasing by 62% a year, while the level of secret investment by the agencies of powerful nations, such as the NSA, is not known – but is presumably very large and fast increasing – in a possibly already-started winner-take-all race to machine super-intelligence among public and private actors.

Global AI experts estimate, on average, that there is a 50% chance of achieving human-level general artificial intelligence by 2040 or 2050, while not excluding significant possibilities that it could be reached sooner. Such estimates may be strongly biased towards later dates because: (A) those that are by far the largest investors in AI – global IT giants and the US government – have an intrinsic interest in avoiding a major public opinion backlash and the political constraints it could bring; (B) as has happened with the surveillance programs and technologies of the Five Eyes countries, it is plausible or probable that huge advancements in AI capabilities and programs have already happened but have been successfully kept hidden for many years or decades, even while involving large numbers of people.

A large and increasing number of experts believe that progress beyond that point may become extremely rapid, in a sort of “intelligence explosion”, posing grave questions about humans’ ability to control it at all (see Nick Bostrom’s TED presentation). Very clear and repeated statements by Stephen Hawking (the most famous scientist alive), Bill Gates, Elon Musk (the main global icon of enlightened tech entrepreneurship) and Steve Wozniak (co-founder of Apple) agree on the exceptionally grave risks posed by uncontrolled machine super-intelligence.

Elon Musk, shortly after having invested in DeepMind, even declared, in a deleted but never retracted comment:

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.

“I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognise the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…”

The Open Letter is incredibly important and well thought out, and it matters greatly for increasing the chance that the overall impact of AI in the coming decades – large in the medium term and huge in the long term, by all accounts – will be in accordance with humanity’s values and priorities. Nonetheless, the document comes with what we believe to be potentially gravely erroneous assumptions about the current state of the art and R&D directions in the IT security of high-assurance systems, which in turn could completely undermine AI verification, validity and control.

In general, the Open Letter overestimates the levels of trustworthiness and measurability, as well as the at-scale costs, of existing and planned highest-assurance low-level computing systems and standards.

In more detail, here are line-by-line suggestions on the Short Term Research Priorities – 2.3.3 Security section, from page 5:

2.3.3   Security

Security research can help make AI more robust.

A very insufficiently secure AI system may be highly “robust” in the sense of business continuity, risk management and resilience, yet still be extremely weak in safety or reliability of control. This outcome may sometimes be aligned with the goals of the AI sponsor/owner – and with those of other third parties, such as state security agencies, publicly or covertly involved – but be gravely misaligned with the chances of maintaining meaningful democratic and transparent control, i.e. having transparent reliability about what the system is actually set out to do and who actually controls it.

Much more important than “robustness”, adequate security is the most crucial foundation for AI safety and actual control in the short and long terms, as well as a precondition for verification and validity. 

As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of cyber-attack surface area. It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks.

There is a large amount of evidence that many AI techniques have long been, and are [1] currently being, used by the intelligence agencies of the most powerful states to attack – often in violation of national or international norms – end-users and IT systems, including IT systems using AI. As said above, while the level of investment by public agencies of powerful nations such as the NSA is not known, it is presumably very large and fast increasing, in a possibly already-started race among public and private actors. The distribution of such funding most likely follows the current ratio of tens of times more resources devoted to offensive R&D than to defensive R&D.

Robustness against exploitation at the low-level is closely tied to verifiability and freedom from bugs. 

This is correct, although partial – especially for critical and ultra-critical use cases, which will become more and more dominant.

It is better to talk about auditability, so as not to get confused with (formal) IT verification. It is crucial and unavoidable to have complete public auditability of all critical HW, SW and procedural components involved in an AI system’s life-cycle, from certification standards setting, to CPU design, to fabrication oversight. In fact, since 2005 the US Defense Science Board has highlighted that “trust cannot be added to integrated circuits after fabrication”, as vulnerabilities introduced during fabrication can be impossible to verify afterwards. Bruce Schneier, Steve Blank and Adi Shamir, among others, have clearly said there is no reason to trust CPUs and SoCs (design and fabrication phases). No end-2-end IT system or standard exists today that provides such complete auditability of critical components.

“Freedom from bugs” is a very improper term, as it excludes voluntarily introduced vulnerabilities, i.e. backdoors, and it should clearly differentiate between critical and non-critical bugs. Vulnerabilities may be accidental (bugs) or voluntary (backdoors), and it is often impossible to prove that a vulnerability was introduced voluntarily rather than accidentally. We should instead talk of “freedom from critical vulnerabilities”.

It is impossible, and most probably will remain so, to ensure perfectly against critical vulnerabilities, given the socio-technical complexity of IT systems, even if that complexity were reduced by 10 or 100 times and audited at radically higher intensity relative to complexity. Nonetheless, it remains extremely crucial that adequate research devise ways to achieve sufficiently extreme levels of confidence in “freedom from critical vulnerabilities”, through new paradigms that give users trustworthy evidence that sufficient intensity and competency of engineering and auditing effort, relative to complexity, has been applied to all critical software and hardware components actually running on the involved device – for instance along the lines of the illustrative ratio sketched below. No system or standard exists today to systematically and comparatively assess a given end-2-end computing service – together with its life-cycle and supply chain – against such target levels of assurance.
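A minimal illustrative sketch of treating “auditing effort relative to complexity” as a gating criterion for claiming confidence in freedom from critical vulnerabilities; the effort-per-complexity threshold, the units and the function name are assumptions invented for this example, since, as noted above, no such standard exists today.

```python
# Illustrative sketch: audit effort relative to complexity as a gating criterion.
# The threshold and units below are assumptions, not an existing standard.

def sufficient_audit_confidence(audit_person_days: float,
                                complexity_kloc: float,
                                min_days_per_kloc: float = 20.0) -> bool:
    """True if independent audit effort meets an assumed minimum per 1,000 lines
    of critical code; below that, no claim of 'freedom from critical
    vulnerabilities' should be made."""
    return audit_person_days >= complexity_kloc * min_days_per_kloc

print(sufficient_audit_confidence(audit_person_days=500, complexity_kloc=10))  # True
print(sufficient_audit_confidence(audit_person_days=50, complexity_kloc=10))   # False
```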

As stated above, all AI systems in critical use cases – and even more crucially, advanced AI systems that will soon be increasingly approaching machine super-intelligence – will need to be secure to such an extent that they are resistant against multiple extremely skilled attackers willing to devote, cumulatively, even tens or hundreds of millions of euros to compromising at least one critical component of the supply chain or life-cycle, through legal and illegal subversion of all kinds, including economic pressures, while enjoying a high level of plausible deniability, a low risk of attribution and (for some state actors) minimal risk of legal consequences if caught.

In order to reduce this enormous pressure substantially, it may be extremely useful to research socio-technical paradigms by which sufficiently extreme levels of user-trustworthiness of AI systems can be achieved while, at the same time, transparently enabling cyber-investigation with due legal process and crime prevention. Resolving this dichotomy would reduce the pressure on states to subvert secure high-assurance IT systems in general, and could possibly – through mandatory or voluntary international lawful access standards – improve humanity’s ability to conduct cyber-investigations into the most advanced private and public AI R&D programs.

For example, the DARPA SAFE program aims to build an integrated hardware-software system with a flexible metadata rule engine, on which can be built memory safety, fault isolation, and other protocols that could improve security by preventing exploitable flaws [20]. Such programs cannot eliminate all security flaws (since verification is only as strong as the assumptions that underly the specification), but could significantly reduce vulnerabilities of the type exploited by the recent “Heartbleed bug” and “Bash Bug”.

There is a need to avoid the risk of relying for guidance on high-assurance low-level standard/platform projects run by the defense agencies of powerful nations – such as the aforementioned DARPA SAFE, NIST, the NSA Trusted Foundry Program or the DARPA Trust in Integrated Circuits Program – when it is widely proven that their intelligence agencies (such as the NSA) have gone to huge lengths to surreptitiously corrupt technologies and standards, even those overwhelmingly used internally in relatively high-assurance scenarios.

Such systems could be preferentially deployed in safety-critical applications, where the cost of improved security is justified.

The cost of radically more trustworthy low-level systems for AI could become very comparable to that of the current corporate-grade security IT systems mostly used as standard in AI system development. The cost differential could possibly be reduced to insignificance through production at scale and open innovation models that drive down royalty costs. For example, hardware parallelization of secure systems and lower unit costs could allow adequately secure systems to compete with, or even out-compete, generic systems in cost and performance. (The emerging non-profit User Verified Social Telematics consortium, for example, shows the possibility of creating sufficiently secure general-purpose computing systems running at 1-300 MHz, with a cost made up of the cost of production – a few tens of euros, depending on quantity – plus overall royalty costs of only 30% of the end-user cost; see the rough arithmetic sketched below.)
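A rough arithmetic check of that claim, as a sketch: only the “few tens of euros” production cost and the 30% royalty share come from the text above; the specific per-unit figures and the corporate-grade comparison price are illustrative assumptions.

```python
# Rough cost arithmetic for the claim above (illustrative figures only).

production_cost_eur = 40.0          # assumed "few tens of euros" per unit at volume
royalty_share = 0.30                # royalties = 30% of end-user cost (from the text)

# If end_user = production + royalties and royalties = 0.30 * end_user,
# then end_user = production / (1 - 0.30).
end_user_cost_eur = production_cost_eur / (1 - royalty_share)
royalties_eur = end_user_cost_eur * royalty_share

corporate_grade_unit_eur = 150.0    # hypothetical corporate-grade security device price

print(f"end-user cost   : {end_user_cost_eur:.2f} EUR")   # ~57.14 EUR
print(f"of which royalty: {royalties_eur:.2f} EUR")        # ~17.14 EUR
print("cheaper than assumed corporate-grade unit:",
      end_user_cost_eur < corporate_grade_unit_eur)
```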

At a higher level, research into specific AI and machine learning techniques may become increasingly useful in security. These techniques could be applied to the detection of intrusions [46], analyzing malware [64], or detecting potential exploits in other programs through code analysis [11].

There is a lot of evidence showing that R&D investment in solutions that defend devices from the inside (i.e. that assume failure of intrusion prevention) could end up increasing the attack surface, if those systems’ life-cycles are not themselves subject to the same extreme security standards as the low-level systems on which they rely. Much like antivirus tools, password-storing applications and other security tools, they are often used as ways to get directly at a user’s or endpoint’s most crucial data. The recent NSA, Hacking Team and JPMorgan scandals show the ability of hackers to move inside extremely crucial systems without being detected, possibly for years. A DARPA high-assurance program highlights how about 30% of vulnerabilities in high-assurance systems are introduced by internal security products.[2]

It is not implausible that cyber attack between states and private actors will be a risk factor for harm from near-future AI systems, motivating research on preventing harmful events.

Such a likelihood is clearly higher than “not implausible”. It is also not correct to say that it “will be a risk factor”, as it already is one: at least one of the parties to such cyber attacks, powerful states, is now extensively using, and presumably aggressively advancing, AI tools.

As AI systems grow more complex and are networked together, they will have to intelligently manage their trust, motivating research on statistical-behavioral trust establishment [61] and computational reputation models [70].

Interoperability frameworks among AI systems, and between AI and IT systems, will need effective, independent ways to assess the security of the other system. As stated above, current comparative standards so lack comprehensiveness and depth that it is impossible to meaningfully compare the security of given systems. A minimal sketch of the kind of behavioral trust establishment the Open Letter alludes to is given below.
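Purely as an illustration of “statistical-behavioral trust establishment” between interoperating systems, here is a simple beta-reputation score; it is a generic, textbook-style model assumed for this example, not the specific approaches in the Open Letter’s references [61] and [70], and the threshold is an assumed policy value.

```python
# Minimal, generic sketch of a statistical-behavioral trust score between
# interoperating AI/IT systems: a beta-reputation model, not the specific
# models cited in the Open Letter's references [61] and [70].

from dataclasses import dataclass

@dataclass
class PeerReputation:
    positive: int = 0   # interactions that respected the expected security behavior
    negative: int = 0   # interactions that violated it

    def record(self, outcome_ok: bool) -> None:
        if outcome_ok:
            self.positive += 1
        else:
            self.negative += 1

    def trust_score(self) -> float:
        """Expected probability of good behavior under a Beta(1, 1) prior."""
        return (self.positive + 1) / (self.positive + self.negative + 2)

# Hypothetical usage: decide whether to accept data from a peer system.
peer = PeerReputation()
for ok in [True, True, False, True, True]:
    peer.record(ok)

TRUST_THRESHOLD = 0.7   # assumed policy threshold
print(round(peer.trust_score(), 3), peer.trust_score() >= TRUST_THRESHOLD)
# 0.714 True  (4 good and 1 bad interaction)
```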

Ultimately, it may be argued that IT security is about the nature of the organizational processes involved, and about the intrinsic constraints and incentives acting on the individuals within such organizations. Therefore, the most critical security factor to be researched for critical AI systems, in the short and long term, will probably be the technical proficiency and citizen accountability of the organizational processes that will govern the setting of key AI security certification standards, and of the socio-technical systems that will be deployed to ensure extremely effective and citizen-accountable oversight of all critical phases in the supply chain and operational life-cycle of AI systems.

The dire short-term societal need and market demand for radically more trustworthy IT systems – for citizens’ privacy and security and for the protection of society’s critical assets – can very much align, in a grand strategic EU cyberspace vision, with satisfying, in the medium and long term, both the huge societal need and the great economic opportunity of creating large-scale ecosystems able to produce AI systems that are high-performing, low-cost and still provide adequately extreme levels of security for critical AI scenarios.

NOTES

[1] See the National Security Analysis Center or the capabilities offered by companies like Palantir

[2] https://youtu.be/3D6jxBDy8k8?t=4m20s

Posted in work1

“Now imagine that some fiendish crime syndicate were to steal such a car, strap a gun to the top, and reprogram it to shoot people. That’s an AI weapon.”

http://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/

Posted in work2

The robots aren’t taking our jobs; they’re taking our leisure

But what about the bounty of digital technology that is in evidence all around us? Almost 30 years ago, the great economist Robert Solow quipped, “You can see the computer age everywhere but in the productivity statistics.”

An answer to the riddle might be that digital technology has transformed a handful of industries in the media/entertainment space that occupy a mindshare that’s out of proportion to their overall economic importance.

http://www.vox.com/2015/7/27/9038829/automation-myth?utm_campaign=vox&utm_content=chorus&utm_medium=social&utm_source=twitter

Posted in work2

Blaming China for cyber attacks without any public evidence creates highly-perverse dynamics

Blaming China for cyber attacks without any public evidence creates highly perverse dynamics: (1) the breached entity, instead of paying in liability or blame for its lack of security, can cast itself as the victim of an act of war; (2) it increases support for requests by defense agencies/contractors for huge funds and for anti-privacy legislation; (3) any expert or media outlet that challenges the misattribution becomes an enemy of the state; (4) there is no serious investigation into who is really behind the attacks, why they did it, and why they succeeded; (5) retaliation from China can just make all of this escalate.

Please, every expert: go out there and challenge the actual evidence (and lack thereof) of the Chinese government’s responsibility for the attacks!

Posted in work2

A definition of “constitutionally-meaningful levels of trustworthiness” in IT systems

A proposed definition of “constitutionally-meaningful levels of trustworthiness” in IT systems:

An IT system (or, more precisely, an end-2-end computing service or experience) will be said to have “constitutionally-meaningful levels of trustworthiness” when its confidentiality, authenticity, integrity and non-repudiation are sufficiently high to make its use – by ordinary, active and “medium-value target” citizens alike – rationally compatible with the full and effective Internet-connected exercise of their core civil rights, except for voting in governmental elections. In concrete terms, it defines an end-2-end computing experience that warrants extremely well-placed confidence that the cost and risk for an extremely skilled attacker to remotely perform continuous or pervasive compromise substantially exceed the following: (1) for the compromise of a single user, the tens of thousands of euros, and the significant discoverability, associated with enacting the same level of abuse through on-site, proximity-based user surveillance or non-scalable remote endpoint techniques, such as those of the NSA TAO; (2) for the compromise of the entire supply chain or life-cycle, the tens of millions of euros, and the significant discoverability, reportedly typically sustained by advanced actors for high-value supply chains, through legal and illegal subversions of all kinds, including economic pressures. A minimal illustrative encoding of these two thresholds is sketched below.
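A minimal sketch that encodes the two cost-and-discoverability thresholds of this definition as a simple check; the euro figures are order-of-magnitude placeholders taken from the prose (“tens of thousands”, “tens of millions”), and the function and parameter names are invented for the illustration.

```python
# Illustrative encoding of the proposed definition. The euro thresholds are
# order-of-magnitude placeholders from the prose ("tens of thousands",
# "tens of millions" of euros), not certified or measured values.

SINGLE_USER_COST_FLOOR_EUR = 50_000        # ~ cost of on-site / TAO-style targeting
SUPPLY_CHAIN_COST_FLOOR_EUR = 30_000_000   # ~ cost of subverting a high-value supply chain

def meets_constitutional_trustworthiness(est_cost_single_user_eur: float,
                                         est_cost_supply_chain_eur: float,
                                         attacks_significantly_discoverable: bool) -> bool:
    """True if the estimated attacker costs substantially exceed both floors
    and the attacks carry significant risk of discovery."""
    return (attacks_significantly_discoverable
            and est_cost_single_user_eur > SINGLE_USER_COST_FLOOR_EUR
            and est_cost_supply_chain_eur > SUPPLY_CHAIN_COST_FLOOR_EUR)

# Hypothetical assessment of an end-2-end computing service:
print(meets_constitutional_trustworthiness(
    est_cost_single_user_eur=80_000,
    est_cost_supply_chain_eur=50_000_000,
    attacks_significantly_discoverable=True))   # True
```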

Posted in work1, work2

The motives of the Hacking Team hack may have much in common with those that brought the British Mr Maskelyne – and possibly his UK corporate/state sponsors – to hack Marconi’s radio telegraph in 1903 …

… to establish their tech/service as the “secure” remote communications of choice for global corporations and governments:

Maskelyne followed his trick with an even bigger showstopper. In June 1903, Marconi was set to demonstrate publically for the first time in London that morse code could be sent wirelessly over long distances. A crowd filled the lecture theatre of the Royal Institution while Marconi prepared to send a message around 300 miles away in Cornwall. The machinery began to tap out a message, but it didn’t belong to the Italian scientist.

“Rats rats rats rats,” it began. “There was a young fellow of Italy, who diddled the public quite prettily …” Maskelyne had hijacked the wavelength Marconi was using from a nearby theatre. He later wrote a letter to the Times confessing to the hack and, once again, claimed he did it to demonstrate the security flaws in Marconi’s system for the public good.

Of course, cable could be undetectably “sniffed” then, just as fiber cable can be sniffed today …

Posted in work2

A Case for Apple, Google, Facebook, etc. to promote an international standard for high-assurance IT for wide public use

See: http://www.openmediacluster.com/2015/07/15/case-for-global-it-giants-to-promote-an-international-standard-for-high-assurance-it-for-wide-civilian-use/

Posted in work1

If sousveillance tools do not have sufficiently extreme levels of security and user-accountability, they become an additional tool of the powers-that-be…

This article – “Indian cops want Bangalore’s citizens to help them catch criminals by using Periscope” – makes me think that if sousveillance tools do not have sufficiently extreme levels of security and user-accountability, they become an additional tool of the powers-that-be…

Even a Transparent Society – which could replace this one if we fail, technically, to find ways to provide meaningful privacy to all – presupposes that we achieve extreme levels of user-trustworthiness in at least part of our IT systems, so as to ensure effectively symmetric transparency.

Posted in work2

Who sets the security standards for lawful access systems like Hacking Team’s?!

After what came out of the Hacking Team scandal, we should consider whether the standards for such technologies – crucial for society, and which many governments want to extend as mandatory to other IP communications – have a problem at their origin, i.e. in their international governance by NIST and ETSI, the non-binding bodies that set those standards (which are then mostly taken up by national governments). We know NIST has broken crucial crypto standards under pressure from the NSA; here is the formal governance structure of ETSI, whose processes are then deeply participated in by industry players:

 

[Screenshot: ETSI formal governance structure, 10 July 2015]

Posted in work2