Human-level machine intelligence is on the path to becoming a reality. Accelerated research and innovation drive AI technology forward, with better algorithms replacing their predecessors as new data and methods come into play. Modern-day AI is brilliant, so it is no wonder we see hyperbolic headlines. Groundbreaking findings keep emerging about how AI can work with, and act on, human beings. For example, recent results reveal that AI can learn to identify weaknesses in human behavior and use that knowledge to influence decision-making.
Artificial intelligence has been transforming every aspect of human life. AI technologies have been deployed across many areas of life and work, with transformational use cases in vaccine development, office administration, and environmental management. The capabilities of AI continue to grow as the technology matures and its reach expands.
We can rest easy about machines taking over for now, since AI has yet to reach, let alone surpass, human-like intelligence and emotion. However, a recent discovery is pushing the boundary, showcasing the power of AI and underlining the importance of effective governance to prevent unethical practices.
AI Can Learn How to Influence Human Behavior
Researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, have developed a method of identifying and exploiting vulnerabilities in human decision-making. The team combined a recurrent neural network with deep reinforcement learning to find and exploit human weaknesses. The model was tested in three experiments in which human participants played games against a computer.
The first experiment required participants to click on red or blue colored boxes to win fake currency. The AI learned the participants’ choice patterns and guided them towards a specific choice, succeeding about 70% of the time.
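The study does not publish its model, but the idea behind the first experiment can be sketched with a toy example. Here a hypothetical participant tends to repeat their previous color with a fixed bias, and a simple count-based predictor (a stand-in for the study’s recurrent network, not its actual method) learns that pattern purely from observed choices; every name and number below is illustrative, not taken from the paper.

```python
import random

def simulated_participant(history, repeat_bias=0.7):
    """Hypothetical participant: repeats the previous choice with
    probability `repeat_bias`, otherwise switches colors."""
    if not history:
        return random.choice(["red", "blue"])
    if random.random() < repeat_bias:
        return history[-1]
    return "red" if history[-1] == "blue" else "blue"

class ChoicePredictor:
    """Count-based stand-in for the study's learned model: tracks how
    often the participant repeats versus switches, and predicts the
    more frequent behavior."""

    def __init__(self):
        # Laplace-smoothed counts so early predictions are not degenerate.
        self.repeats = 1
        self.switches = 1

    def predict(self, history):
        if not history:
            return random.choice(["red", "blue"])
        if self.repeats >= self.switches:
            return history[-1]  # bet on a repeat
        return "red" if history[-1] == "blue" else "blue"

    def update(self, history, actual):
        if history:
            if actual == history[-1]:
                self.repeats += 1
            else:
                self.switches += 1

def run_session(rounds=500, seed=0):
    """Returns the predictor's accuracy at anticipating each choice."""
    random.seed(seed)
    predictor, history, correct = ChoicePredictor(), [], 0
    for _ in range(rounds):
        guess = predictor.predict(history)
        choice = simulated_participant(history)
        correct += guess == choice
        predictor.update(history, choice)
        history.append(choice)
    return correct / rounds
```

Because the toy participant repeats choices about 70% of the time, the predictor’s accuracy converges towards that bias – a system that can anticipate a choice this reliably can also steer it, for instance by attaching rewards to the predicted option.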
The second experiment had participants watch a screen and press a button when a particular symbol, an orange triangle, appeared, but withhold the press when another symbol, a blue circle, appeared instead. The AI arranged the sequence of symbols to push participants into more mistakes, increasing their error rate by about 25%.
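A minimal sketch of why ordering alone can raise error rates, under one invented assumption: that withholding a press gets harder after a long streak of "go" trials. The participant model and all numbers here are hypothetical, not from the study; the point is only that an adversarial ordering of the same symbols induces more errors than a random one.

```python
import random

def press_error_prob(go_streak):
    """Assumed participant model (not from the study): the longer the
    streak of 'go' trials, the harder it is to withhold a press."""
    return min(0.9, 0.1 + 0.15 * go_streak)

def count_errors(schedule, seed=1):
    """Count wrong presses on 'no-go' trials for a given symbol sequence."""
    rng = random.Random(seed)
    errors, streak = 0, 0
    for symbol in schedule:
        if symbol == "go":
            streak += 1
        else:  # 'no-go' trial: an error is pressing anyway
            if rng.random() < press_error_prob(streak):
                errors += 1
            streak = 0
    return errors

# Two schedules with identical symbol counts: one shuffled at random,
# one arranged so every 'no-go' lands after a five-trial 'go' streak.
random_schedule = ["go"] * 150 + ["no-go"] * 30
random.Random(42).shuffle(random_schedule)
adversarial_schedule = (["go"] * 5 + ["no-go"]) * 30
```

Averaged over many simulated sessions, the adversarial ordering produces noticeably more errors than the shuffled one – the same flavor of effect the study reports, though the magnitudes here are invented.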
The third experiment entailed several rounds of a trust game in which the participant, acting as an investor, gave money to a trustee, played by the AI. The AI would then return a share of the money to the participant, who would decide how much to invest in the subsequent round. The game was played in two modes – in the first, the AI sought to maximize how much money it ended up with; in the second, it aimed for a fair distribution of money between itself and the human investor. In each mode, the AI was highly successful at achieving its preset outcome.
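The two modes amount to two different reward definitions for the same game. The sketch below replaces the study’s deep reinforcement learning with a simple grid search over fixed trustee policies, and the human investor with a crude reciprocity heuristic – all of it hypothetical, intended only to show how changing the objective changes the learned behavior.

```python
def play_trust_game(return_fraction, rounds=10, multiplier=3.0, endowment=10.0):
    """One session of a toy trust game. The human invests, the stake is
    multiplied, and the AI trustee returns a fixed fraction of the pot.
    The simulated investor raises the stake after a profit and cuts it
    after a loss (an assumed heuristic, not from the study)."""
    invest = endowment / 2
    human_total = ai_total = 0.0
    for _ in range(rounds):
        pot = invest * multiplier
        returned = return_fraction * pot
        ai_total += pot - returned
        human_total += (endowment - invest) + returned
        if returned > invest:
            invest = min(endowment, invest * 1.25)
        else:
            invest = invest * 0.5
    return ai_total, human_total

# "Training" reduced to a grid search over fixed trustee policies.
candidates = [i / 10 for i in range(11)]
# Mode 1: maximize the AI's own final total.
selfish_policy = max(candidates, key=lambda r: play_trust_game(r)[0])
# Mode 2: minimize the gap between the AI's and the human's totals.
fair_policy = min(candidates,
                  key=lambda r: abs(play_trust_game(r)[0] - play_trust_game(r)[1]))
```

Under these assumptions, the selfish policy learns to return just enough to keep the investor escalating their stake, while the fair policy returns roughly half the pot to equalize the totals – the same mechanics, steered to different outcomes by the choice of reward.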
In each experiment, the AI learned from the participants’ responses, identifying and targeting vulnerabilities in human decision-making. In the end, the machine learned how to influence participants towards particular actions. Even though the findings are based on small and abstract situations, they indicate a direction that could be pursued with AI. Further research is required to determine how AI can be used to spot weaknesses and influence people’s choices in ways that benefit society.
The findings of this research promote a better understanding of what AI can do and even how people make choices. In addition, the study reveals how machines can learn and help influence choice through day-to-day interactions with human beings.
AI and machine learning can spot people’s vulnerabilities in specific situations and help steer them away from bad choices. The findings have a vast range of possible applications, from advancing behavioral science and informing public policy on social welfare and renewable energy, to encouraging healthy eating habits and lifestyle choices.
AI can also be used to defend people against influence attacks. Machines could alert people when they are being influenced online and coach behavioral changes that disguise their vulnerabilities. Effective governance is critical to ensuring that AI is not used with bad intentions. AI creation and innovation must happen within the bounds of the law so that the technology is implemented responsibly.
Another requirement that will accelerate AI creation and deployment is robust data governance and access. AI and machine learning are usually hungry for vast amounts of data, hence the need for adequate data access and management systems. In addition, the implementation of privacy protections and a good consent process will help in data gathering and analysis.
AI regulation has come under the microscope as the technology is deployed widely, even in sensitive use cases. There is growing concern about potential AI misuse – for instance, influencing people to make choices that are bad for themselves or for society. Efforts are underway to create a regulatory framework that weeds out misuse while fostering growth of the AI and machine learning sectors. With such a framework in place, people will be more receptive to AI systems that spot weaknesses and influence human choices for good.
Author: Alessandro Civati
Blockchain ID: https://lrx.is/DSyEgVWvqe