Thorsten Stremlau, co-chair of TCG’s Marketing Work Group, discusses how the data systems behind AI can be kept secure
Attacks on artificial intelligence (AI) differ from the typical cyber security threats seen on a daily basis, but that does not make them rare. Hacking continues to grow more sophisticated and has evolved well beyond simply hiding bugs in code. Unless AI systems are properly secured, hackers can tamper with them and alter their behaviour in order to ‘weaponise’ AI. This gives attackers an ideal way to obtain sensitive data or to corrupt the systems designed to authenticate and validate users, with no easy fix should an attack succeed.
When considering security, it is important to look not only at the risk of a rogue AI, but also at how a system’s data sets can be secured. To protect against an attack, we must look at the four interacting elements of machine learning and how each can be affected by a malicious attack.
Protecting the key elements
Machine learning is typically made up of four elements. The first to review is the data set: data is supplied to a machine so that it can function, learn and, based on the information provided, make decisions. In this context, data can be any processed fact, value, sound, image or text that can be interpreted and analysed by AI. From the outset, users must ensure that the data provided to the machine consists of meaningful, accurate information.
Algorithms are another factor to consider. These are the mathematical or logical procedures that feed data into a model. For a secured system, any algorithm used must be tailored to the specific problem being solved, so that it reflects the model and the nature of the data provided.
Another element of machine learning is the model: a computational representation of a real-world process. Users who train models can make predictions that mirror real life, and the data incorporated into a model should increase the accuracy of those predictions over time. It is vital that the model is provided with trusted data, to prevent its predictions from deviating.
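To make that deviation concrete, the short sketch below (an illustrative example, not drawn from TCG material) shows a toy nearest-centroid model whose prediction flips once a handful of mislabelled readings are slipped into its training data:

```python
# Tiny illustration of why untrusted training data matters: a nearest-centroid
# "model" trained on clean sensor readings classifies a new reading correctly,
# but a few poisoned samples shift its decision. All values are made up.
def centroid(values):
    return sum(values) / len(values)

def predict(reading, normal_centroid, fault_centroid):
    return "normal" if abs(reading - normal_centroid) < abs(reading - fault_centroid) else "fault"

normal_readings = [20.1, 20.5, 21.0, 20.8]   # trusted "normal" examples
fault_readings = [80.2, 79.5, 81.0]          # trusted "fault" examples

clean_model = (centroid(normal_readings), centroid(fault_readings))
print(predict(60.0, *clean_model))           # -> "fault"

# An attacker injects mislabelled "normal" samples into the training set
poisoned_normal = normal_readings + [75.0, 78.0, 76.5]
poisoned_model = (centroid(poisoned_normal), centroid(fault_readings))
print(predict(60.0, *poisoned_model))        # -> "normal": the prediction has deviated
```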
Last but certainly not least, training is the process that allows machine learning models to identify patterns in order to make decisions. To mitigate corruption of this element, any training applied must come from a trusted source, so that supervised, unsupervised and reinforcement learning does not push the model away from accurate feature extraction and predictions.
Using ‘trusted’ principles
A ‘Trusted Computing’ model, like the one developed by the Trusted Computing Group (TCG), can readily be applied to all four of these AI elements to guard against a rogue AI. For the data set element, a Trusted Platform Module (TPM) can be used to sign data and verify that it has come from a trusted source.
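As an illustration of that sign-and-verify idea, the sketch below uses an ordinary software ECDSA key from Python’s cryptography library; in a real deployment the private key would be generated and held inside the TPM, but the flow of signing a data set at its source and verifying it before training is the same:

```python
# Minimal sketch of TPM-backed data provenance. A software key stands in for a
# TPM-resident key purely to illustrate the sign-and-verify flow.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Data producer (e.g. a sensor or curation pipeline) signs the data set it emits.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

dataset = b"temperature,21.4\ntemperature,21.6\n"
signature = private_key.sign(dataset, ec.ECDSA(hashes.SHA256()))

# Consumer (the training pipeline) verifies the signature before using the data.
try:
    public_key.verify(signature, dataset, ec.ECDSA(hashes.SHA256()))
    print("Data set verified - safe to feed into training")
except InvalidSignature:
    print("Data set rejected - signature does not match")
```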
A hardware Root of Trust, such as the Device Identifier Composition Engine (DICE), can ensure that sensors and other connected devices maintain a high level of integrity and continue to provide accurate data. Each boot layer in a system receives a DICE secret, which combines the secret of the preceding layer with a measurement of the current one. As a result, if a layer is successfully exploited, its measurements and secrets change, so the compromised layer cannot expose data protected by the original chain. DICE also automatically re-keys the device if a flaw is unearthed within the device firmware, and the strong attestation offered by the hardware makes it a valuable tool for identifying vulnerabilities in any required updates.
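That layered derivation can be pictured in a few lines. The layer names and contents below are purely illustrative, and a real DICE engine performs these steps in hardware during boot:

```python
# Sketch of DICE-style layered key derivation: each boot layer's secret is
# derived from the previous layer's secret combined with a hash (measurement)
# of the current layer's code, so changing any layer changes every secret
# derived after it. All inputs here are illustrative placeholders.
import hashlib
import hmac

def next_layer_secret(previous_secret: bytes, layer_code: bytes) -> bytes:
    measurement = hashlib.sha256(layer_code).digest()
    return hmac.new(previous_secret, measurement, hashlib.sha256).digest()

unique_device_secret = b"per-device secret provisioned at manufacture"  # illustrative
boot_layers = [b"boot loader image", b"firmware image", b"application image"]

secret = unique_device_secret
for layer in boot_layers:
    secret = next_layer_secret(secret, layer)
    print(secret.hex()[:16], "...")

# If an attacker modifies the firmware image, its measurement changes, so this
# secret and every later one change too; keys derived from the old chain no
# longer work, which is the basis of DICE's automatic re-keying after a fix.
```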
A TPM can also provide mechanisms that protect, detect, attest, and recover from any attempt to modify code, maliciously or otherwise; one example would be prohibiting changes to critical temperature readings. This is especially important when it comes to safeguarding the algorithms used within an AI system. Furthermore, deviations of the model caused by bad or inaccurate data can be prevented by applying trusted principles for cyber resilience, network security, sensor attestation and identity. Businesses can also ensure that the training given to a machine learning model is secure by confirming that the entities providing it adhere to Trusted Computing standards.
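The detection side of this can be sketched with a measured-boot style check: measurements of the algorithm code and model are folded into a single value, TPM-style, and compared against a reference recorded when the system was known to be good. The extend operation and the artefact contents below are simulated in software purely for illustration:

```python
# Sketch of measured-boot style integrity checking for AI artefacts such as
# algorithm code or model weights. A TPM extends measurements into a PCR as
# new = SHA256(old || measurement); here the operation is simulated in software.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # Binds the previous PCR value and the new measurement into one value
    return hashlib.sha256(pcr + measurement).digest()

def measure_platform(artefacts):
    pcr = bytes(32)  # PCRs start at all zeros
    for blob in artefacts:
        pcr = extend(pcr, hashlib.sha256(blob).digest())
    return pcr

# Reference value recorded once, when the system is in a known-good state
golden = measure_platform([b"algorithm code v1", b"model weights v1"])

# Checked again at boot or before inference
current = measure_platform([b"algorithm code v1", b"model weights v1"])
print("unmodified" if current == golden else "tampering detected - trigger recovery")

# A single changed byte in the code or model produces a completely different value
tampered = measure_platform([b"algorithm code v1 (patched)", b"model weights v1"])
print("unmodified" if tampered == golden else "tampering detected - trigger recovery")
```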
A secured system
It is imperative that in the wake of growing attacks on AI, businesses continue to educate themselves on the threats they may encounter, identify vulnerabilities within their existing systems and take preventative steps before it’s too late. Using Trusted Computing standards and hardware such as DICE can help provide a strong layer of defence against malicious intent, protecting sensitive data and ensuring organisations avoid any severe damage, be it financial or reputational.
Written by Thorsten Stremlau, co-chair of TCG’s Marketing Work Group