Artificial intelligence (AI): between data risk and data protection

Artificial intelligence is among the top issues of the digital economy, as a Bitkom survey recently confirmed. According to the digital industry association Bitkom, the world market for applications in the fields of artificial intelligence (AI), cognitive computing and machine learning is on the verge of a breakthrough.

Amazon Echo (image: Amazon)

Global sales of hardware, software and services around cognitive computing and machine learning are thus expected to grow by 92 percent to 4.3 billion euros in 2017. By 2020, the world market volume is then expected to more than fivefold, to 21.2 billion euros. Given this growth, many see little sense in imposing narrow limits; instead, the full potential should be exploited in order to reap the benefits of digitization.

But privacy and consumer advocates are warning once again, some will say. Data protection regulators see a need to study the impact of digital technologies on data protection: how modern robotics, artificial intelligence and machine learning will affect informational self-determination, and how data protection can respond to these challenges. Intelligent assistants such as Google Home or Amazon Echo arouse concern and criticism. However, applications that go in the direction of artificial intelligence can also help data protection.

Artificial intelligence also supports data protection

We know that companies have great difficulty detecting data breaches quickly and reliably: according to the CSI report from Balabit, 44 percent of companies are unable to detect and report data protection violations within 72 hours. Not only with regard to the General Data Protection Regulation (GDPR/DSGVO) is urgent action required. Artificial intelligence (AI) solutions can help here, and not only in the future: over 40 customers already use IBM's Watson security technology. Chatbots such as Demisto and Artemis help security analysts search for possible attacks. Deutsche Telekom has started a competition for privacy bots, intelligent assistants that help users, for example, to understand and examine privacy policies. Even in police work, the first AI systems are being used.
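The 72-hour reporting window mentioned above (Art. 33 GDPR) can be made concrete with a minimal sketch. This is an illustrative helper of my own, not part of any product named here; the function names are assumptions:

```python
from datetime import datetime, timedelta

# Art. 33 GDPR: notify the supervisory authority "without undue delay and,
# where feasible, not later than 72 hours" after becoming aware of a breach.
REPORTING_WINDOW = timedelta(hours=72)

def report_deadline(detected_at: datetime) -> datetime:
    """Latest point in time at which the supervisory authority must be notified."""
    return detected_at + REPORTING_WINDOW

def is_overdue(detected_at: datetime, now: datetime) -> bool:
    """True if the 72-hour window has already elapsed without a report."""
    return now > report_deadline(detected_at)

detected = datetime(2017, 5, 2, 9, 0)
print(report_deadline(detected))                          # 2017-05-05 09:00:00
print(is_overdue(detected, datetime(2017, 5, 4, 9, 0)))   # False, still in the window
```

The hard part in practice is of course not the arithmetic but detecting the breach in the first place, which is exactly where the AI tools discussed here come in.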

Many security providers have extended their solutions with machine learning and artificial intelligence, so AI will have significant consequences for cyber security. Some examples:

  • Palo Alto Networks has acquired LightCyber in order to strengthen its next-generation security platform. Machine learning should stop attacks earlier, because operations within the network that deviate from the norm are detected better and faster.
  • Sophos extends its next-generation endpoint portfolio in the field of machine learning with the acquisition of Invincea.
  • The new Centrify Analytics Service uses machine learning to assess risk and is aimed at enforcing access decisions in real time.
  • SentinelOne has expanded its Endpoint Protection Platform with a Deep File Inspection (DFI) engine. The new functionality uses machine learning to identify the most advanced cyber threats and prevent their execution.
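The baselining idea behind products like these can be illustrated with a minimal sketch (my own simplification, not vendor code): learn the normal range of a metric such as logins per hour, then flag values that deviate strongly from that baseline.

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Learn normal behaviour from historical values (e.g. logins per hour)."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

logins_per_hour = [12, 15, 11, 14, 13, 16, 12, 15]
baseline = fit_baseline(logins_per_hour)
print(is_anomalous(14, baseline))    # False: a typical value
print(is_anomalous(400, baseline))   # True: sudden spike, e.g. credential stuffing
```

Real products replace this simple z-score test with far richer models over many signals, but the principle, learning what "normal" looks like and reacting to deviations, is the same.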

A risk analysis of artificial intelligence is a must

AI systems will help with security risk analysis, but they must also be subjected to a risk analysis themselves. As the Future of Life Institute, for example, has pointed out, AI systems can be programmed in such a way that they wreak havoc and even carry out cyber attacks themselves. Or AI systems have good goals but take paths that can cause damage. There are good reasons, then, that Google, Facebook, Amazon, IBM and Microsoft are jointly concerned with the possible consequences of AI and with necessary guidelines for AI.

From the point of view of data protection, the following applies: "Where a type of processing, in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data." A data protection impact assessment comprises a systematic description of the envisaged processing operations and the purposes of the processing, an assessment of the necessity and proportionality of the processing operations in relation to the purposes, an assessment of the risks to the rights and freedoms of data subjects, and the measures envisaged to address those risks. So much for data protection law.
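The four components of such an assessment can be captured in a simple checklist structure, so that no element is forgotten. This is an illustrative sketch of my own, not an official tool; the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Minimal checklist for a data protection impact assessment (Art. 35 GDPR)."""
    description_of_processing: str = ""   # systematic description and purposes
    necessity_proportionality: str = ""   # necessity/proportionality vs. the purposes
    risk_assessment: str = ""             # risks to rights and freedoms of data subjects
    mitigation_measures: str = ""         # measures envisaged to address the risks

    def is_complete(self) -> bool:
        """All four sections must be filled in before the assessment is done."""
        return all([self.description_of_processing,
                    self.necessity_proportionality,
                    self.risk_assessment,
                    self.mitigation_measures])

dpia = ImpactAssessment(description_of_processing="Voice assistant logs commands ...")
print(dpia.is_complete())  # False: three sections are still missing
```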

AI requires new privacy tools

Bearing in mind, however, that AI systems are by definition self-learning and can go "their own way" and develop their own rules, applying the instruments of a data protection impact assessment is not easy; some will feel it is hardly possible to implement it as in other cases. Google researchers, for example, reported that AI systems developed their own encryption in order to communicate. Verifying this in the form of a privacy audit will certainly not be easier than conventional encryption tests, where one can at least speak person to person with the supplier companies.

Data protection must therefore get involved with AI systems earlier than at the stage of auditing finished functions and procedures. In the sense of privacy by design, the algorithms must be designed so that they are transparent, controllable and privacy-friendly. Such demands were raised, for example, by the Association for Computing Machinery's U.S. Public Policy Council (USACM) in its "Statement on Algorithmic Transparency and Accountability". These ideas should now be included in the data protection discussion on artificial intelligence, so that AI systems help data protection and barely transparent data risks are avoided from the outset.
