Social media giant Meta Platforms has announced a suite of new AI tools to be added across its apps.

The new features have sparked debate about how artificial intelligence challenges notions of transparency and privacy. While the tools are intended to streamline online engagement, experts are stressing the need to assess all aspects of the program and stay informed about what it means for your data.

Meta is the parent company of apps such as WhatsApp, Facebook and Instagram

Technology reimagined 

According to Meltwater, a leading media, consumer and sales intelligence service, 21.3 million Aussies use social media every day. Of these, 63 percent use Facebook Messenger, 55.5 percent use Instagram, and 34.7 percent use WhatsApp. All three services are controlled by Meta.

And now, thanks to the new AI tools, everyone using Meta's apps will have access to a range of fresh features, including the ability to edit images by changing their style and background. Meta has also developed a conversational assistant that offers real-time information drawn from Bing and creates photorealistic images from text prompts.

Not only will individuals have access to a primary chatbot, but 28 other chatbots are being rolled out as well. Each of these AIs has a unique personality, voiced by celebrities such as Snoop Dogg and Kendall Jenner.

The aim of these new features, according to Meta, is to increase our ability to connect with one another while fostering creativity and protecting users.

AI Studio, a platform where developers and creators can build their own AI programs, has been designed to benefit small businesses by increasing engagement with customers and growing their customer databases. Meta also hopes the platform will help creators diversify their content.

The program will include some features similar to ChatGPT

Questions and concerns 

However, while Meta’s features have several positive implications, experts are concerned about what these developments mean for our rights to privacy and information. Meta has confirmed that elements of individuals’ conversations with the AI chatbot may be shared with partners, including search providers. Moreover, the AI is built on a foundation of information collected before 2023, meaning some of the answers the program provides will be out of date.

According to Associate Professor Sam Kirshner, from the UNSW School of Information Systems and Technology Management, it’s important to recognise that AI is no more reliable than traditional methods of information gathering.  

“Historically, our digital consumption was tied to specific sources like websites or blogs, where access was provided through search or newsfeed algorithms,” he explains.

“While Conversational AI will now start pulling information from websites, responses will likely still be underpinned by the foundational models and training datasets, meaning that the information presented can be unique to each individual user. Chatbots will likely result in hyper-personalised responses and conversations. Consequently, two users might walk away with starkly different interpretations of the same topic.”

He stresses that the information collected by the chatbot will be just as biased towards our individual thoughts and beliefs as the information currently available through search engines.

Dr Sebastian Sequoiah-Grayson, from the UNSW School of Computer Science and Engineering, points out that AI is still new to our everyday lives, and thus it’s important to approach it critically.

“Like any new pervasive technological breakthrough, AI will pose both risks and opportunities. Although many of these will be anticipated, many also will be things of genuine surprise – existing outside the space of our expectations.” 

He emphasises that AI will have “a profound impact on our understanding of testimony, identity, authorship, reasoning, understanding, and personhood. Our attitudes towards all of these things, and many more besides, will slide.”

A new age 

Luckily, Meta is aware of the risks and challenges posed by its new AI program, and has begun implementing strategies to protect users.

For example, all AI-edited images will carry a watermark indicating the use of AI, preventing them from being passed off as human-generated content. Meta is also exploring how markers, both visible and invisible, can be used to identify AI-generated messages and text.

Meta has also introduced safeguards that allow users to delete their chatbot conversations from its database. Similarly, the company has launched a Generative AI Privacy Guide to help keep the public informed about how its AI is developed and deployed.

Associate Professor Kirshner acknowledges these efforts to maintain transparency. He believes it’s a matter of continuously monitoring the progress and impact of AI and updating our policies in response.

“AI risks fragmenting our collective understanding, as the very fabric of shared digital information diversifies based on individualised AI interactions,” he explains. 

“Generative AI may push us to redefine authenticity, which may emphasise intent over factual accuracy. This evolution highlights the importance of transparency in AI-driven creations and for consumers to be well-informed.”

“Meta is actively taking steps to address these issues. However, only time will tell whether Meta’s efforts to create transparent and responsible AI are effective.”
