This experiment sounds a bit like recording and replaying yourself in order to come to an independent conclusion about the error of your ways.
Initial data on Laika is promising: 75 percent of the 60,000 students who have participated in the program since October 2023 reported that they wanted to change their relationship with social media after chatting with Laika, according to the team. However, the long-term impact of the program remains to be seen.
And Laika’s impact might be more complicated than it seems. Julia Stoyanovich, the director of NYU’s Center for Responsible AI, expressed concerns about using a project like this with children, a vulnerable population, without prior evidence of its efficacy.
The data could be viewed as promising or not. The 25 percent who did not feel moved to change their relationship with social media after interacting with the bot could be seen as a cohort to be deeply concerned about. Perhaps the exact opposite happened: they now want to redouble their presence and efforts on social media, to the greater exclusion of everything else. Anyone who recalls being a teen, or has raised one, knows that teen brains do not follow the same logical flows as adult ones. Teens tend to do things that cannot be explained rationally, yet it all computes perfectly for them. The most at-risk population likely thinks and acts in ways that are many standard deviations from the norm, even for teens. One could assume that this group makes up much of the 25 percent, and that their troubles just got worse thanks to a meddlesome AI.