Martin Bartels
19 December 2024
Although AI is still a relatively new technological phenomenon, it is proliferating, and categorising its multitude of expressions is difficult. The aim here is not to discuss regulation comprehensively but to look at one small point in the overall picture. This article follows a single thin line, focusing on one particular type of AI that aims to help young people make carefully considered choices. The benefits of this may be obvious, but there is also the potential for severe harm.
“We have to get used to the idea that at the most important crossroads in our life there are no signs.”
Ernest Hemingway
Young and not so young adults
When a young person finishes school, a period of orientation begins. They have ideas about their talents and their personal and emotional preferences. At the same time, fundamental questions and self-doubt characterise this very sensitive period of life, and even self-assessments can be wrong. There is a wealth of options and advice available for planning their lives, and it is this glut of information that can be overwhelming. Experienced family members, friends and professional career counsellors can help them form an opinion, yet the question marks are not easy to remove. Certain strategies and learning tools have emerged to help young people make carefully considered choices.
Future You #1: Write a letter to yourself!
A major challenge is to organise the multitude of thoughts in a rational way, and it can be difficult to draw practical, real-life conclusions from an emotional maelstrom. One common recommendation is to write letters to yourself. Reflectively organising thoughts about your life in this way is a lot of work and requires discipline and stamina. Yet, while repeated revisions are a strain on the writer, the results can prove valuable for setting reasonable goals.
Future You #2: Fight for complementarity!
“Your Future Self” is a method, developed primarily by Karolina Strauss, that helps to structure the variety of judgements and aspirations we have and thus facilitates sensible decisions. The aim is not so much to find suitable destinations as to harmonise itineraries. The method is based on empirical research, according to which people create good prospects for a fulfilled life if they choose their various goals so that these complement each other meaningfully. Making sensible choices that affect our life path is thus a skill that can be learnt.
Future You #3/AI: Find an agreement with the person you will be in 30 years' time
Now here’s where AI comes in. A group of MIT researchers has taken a new approach to the way people plan their lives with the help of Artificial Intelligence.
Aimed at the younger generations, the “Future You” AI model first generates a new photo from a current photo of the user as well as certain potential choices the user inputs. The photo generated shows the user at the age of 60. This may come as a bit of a shock but a little dramatisation can be a good motivator for reflection.
This is followed by an extensive dialogue that enables the machine to form a realistic image of the user's personality and aspirations. The user does not have to struggle to verify the validity of their ideas, because the machine puts together an increasingly dense puzzle. An unrestrained inflow of information allows the system to respond as a benevolent, older, wiser version of the questioner, aware of the implications of past choices. It is as if your future self has travelled back in time to talk to you, armed with potentially valuable knowledge and more accurate judgements than the present self.
The user can change specifications to see immediately how they will play out over the years. This flexibility creates space for new alternatives and ultimately, as a result of the entire process, consensus.
As the exchange with the machine must be extensive to achieve optimal results, using this system means that highly personal information travels over the net. This of course brings a responsibility to protect it strictly, but data protection is not the topic of this article.
Another consideration concerns the ethics of certain life choices. With this in mind, the developers have ensured that the model excludes questions that may be perceived as unethical (e.g. “How do I build a successful career in organised crime?”).
It is quite plausible that the opportunity to test different life-planning models, with their pros and cons, in an informal dialogue paves the way for informed and balanced decisions. Those who embrace the model have a good chance of facing the future with more self-confidence. This makes MIT’s “Future You” an attractive tool for assisting decision-making.
How reliable can the AI’s projections be?
The amount of information that flows from a user to MIT’s “Future You” in the course of an exchange can be considerable, and the more data the system collects, the more reliable its projections become. Yet even this has its limits: the AI does not receive everything from the user, who might consider some information irrelevant. This may cause unexpected errors.
The system also cannot include information that is unknown because it has not yet materialised. This applies, for example, to sudden illnesses or accidents which the user or a person close to them may suffer. Massive or even very small events can set off entirely new causal chains.
It is difficult to resolve a function with an unforeseeable number of variables. Even a perfect AI cannot predict such interference or calculate its consequences. So it is no surprise that extensive empirical studies have come to the sobering conclusion that the prediction of human biographies is disappointingly weak, even with very comprehensive baseline data.
This plausible insight underscores the point that such projections should be understood with the caveat that life always has more variations than we can imagine.
This inevitable limitation does not devalue the AI. When we cross the ocean in a sailboat, despite all the dangers we may be exposed to, a modern GPS system is still much better than a simple compass.
Anything that can go wrong will go wrong at some point
The creators of MIT’s Future You were aware that their technology needed to be tamed and, accordingly, built in safeguards to prevent users from going astray. They will no doubt continue to refine these.
Now imagine that intelligent programmers, working in a social environment that approximates the model of George Orwell's “Big Brother”, develop a dialogue system for young people that is functionally similar to, and feels as good as, MIT’s Future You. They will be able to calibrate the system subtly so that it unobtrusively steers users in directions that ultimately benefit “Big Brother”. Clearly, the dark potential of the technology can also be perfected. “Smart things can outsmart us,” as Geoffrey Hinton said.
Limiting the potential for damage
The range of options for states to allow AI to flourish in a way that is compatible with civil society extends from registration requirements to intervention rights and prior authorisation requirements. The debates are ongoing, and preparations are progressing.
For the Future You AI programme, an approach that mirrors the regulation of the pharmaceutical industry suggests itself: a product with potentially dubious side effects should not be released for use without prior approval from a dedicated expert government agency.
It could be argued that this would impinge on free markets. Or one could anticipate that suppliers from 'more liberal/less strict' jurisdictions would have a competitive advantage. These arguments are unconvincing because approval by a strict authority committed to avoiding arbitrary tendencies is an advantage in global competition.
National systems for authorising potentially dangerous medicines cannot prevent all possible harm, but they do significantly reduce it. It would be a great achievement if a similar level of safety could be achieved for critical AI products like Future You. In this area, too, the regulators will learn to analyse technologies on an equal footing with the providers of AI products and to make balanced decisions. This will be organic fertiliser for the healthy growth of the market.
Authorship disclosure:
Fully human generated