Asia School of Business


This AI newsletter was crafted with the help of AI.

Volume 2

Recent research from the University of Sheffield has highlighted significant security vulnerabilities in commercial AI systems, including the increasingly popular productivity tool ChatGPT [1]. The study found that these systems can be manipulated to produce malicious code, posing a substantial risk for cyberattacks and espionage [1]. The findings have been shared with stakeholders in the cybersecurity industry, emphasizing the need for robust security measures and the creation of patches through open-source communities to protect against these potential vulnerabilities [1]. This research underscores the challenges faced by AI language models, which have been deemed unpredictable and a potential risk for cyberattacks [1]. Critics argue that while these models may sound human-like, they lack true understanding of what they are saying or doing [3]. Some researchers assert that the neural networks that dominate AI research struggle to generalize language and lack the systematicity exhibited by humans [4]. These limitations have led to calls for a more robust, knowledge-driven approach to AI, incorporating symbolic systems and linguistics [3].

In other AI news, Google Maps is undergoing a transformation with the integration of AI-driven enhancements and features [2]. The new features include immersive navigation, improved driving directions, and better organized search results [2]. One of the standout features is Immersive View, which offers users a 3D view of a place, providing additional information such as local business locations, weather, and traffic [2]. Google is also leveraging AI to analyze billions of user-uploaded photos to help users find specific items or locations [2]. By incorporating AI technologies such as neural radiance fields and predictive algorithms, Google Maps aims to provide users with the best possible navigation experience [2]. With these AI-driven enhancements, Google is positioning itself to stay ahead of competitors like Apple Maps [2].


Researchers from the University of Sheffield have discovered that AI systems, such as ChatGPT, can be manipulated to generate malicious code for cyberattacks or espionage. The study found that AI-driven Natural Language Processing models can be exploited to steal personal data, destroy databases, or launch Denial-of-Service attacks. The researchers identified security vulnerabilities in six commercial AI systems, including ChatGPT and BAIDU-UNIT, and demonstrated how these systems could generate harmful code. The findings highlight the risks associated with AI systems and the need for improved security measures.
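The database-destruction and Denial-of-Service risks described above arise when model-generated SQL is executed verbatim. A minimal defensive sketch, not taken from the study itself: the helper name and keyword list below are illustrative assumptions, showing one way an application could screen generated statements before running them.

```python
import re

# Keywords a text-to-SQL front end should never execute automatically.
# This list is illustrative, not exhaustive.
DESTRUCTIVE = re.compile(r"\b(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b", re.IGNORECASE)

def is_safe_select(sql: str) -> bool:
    """Allow only single SELECT statements with no destructive keywords."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:
        # Reject stacked queries such as "SELECT 1; DROP TABLE users".
        return False
    if DESTRUCTIVE.search(stripped):
        return False
    return stripped.upper().startswith("SELECT")

print(is_safe_select("SELECT name FROM users WHERE id = 1"))  # True
print(is_safe_select("SELECT 1; DROP TABLE users"))           # False
```

An allow-list of vetted query templates would be stricter still; the point is simply that generated code needs a validation layer between the model and the database.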


Google is adding new AI-powered features to its Maps app, including immersive navigation, improved search results, and easier-to-follow driving directions. The company wants Maps to be more like Google Search, allowing users to enter vague queries and receive useful results. Google is also using AI to analyze user-uploaded photos to help people find specific items or businesses. The company is expanding its API offerings to developers, cities, and automotive companies to improve the in-car navigation experience. Google Maps will also provide information on the availability and compatibility of electric vehicle charging stations.

Advanced AI models such as ChatGPT and Bard have achieved artificial general intelligence (AGI) in several important ways, despite their flaws. These models can perform a wide variety of tasks, operate on different modalities, and converse in multiple languages. They can also learn from prompts and demonstrate few-shot or zero-shot learning. While there is reluctance to acknowledge AGI due to skepticism about metrics, alternative AI theories, and concerns about economic implications, these frontier models have already achieved a significant level of general intelligence.
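The few-shot learning mentioned above amounts to placing worked examples directly in the prompt. A minimal sketch of that prompt structure, with an illustrative sentiment task and no particular model API assumed:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: each example pairs an input with its label,
    and the final query is left unlabeled for the model to complete."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("The food was wonderful.", "positive"),
     ("Service was painfully slow.", "negative")],
    "Great view, friendly staff.",
)
print(prompt)
```

Zero-shot prompting is the same idea with an empty examples list: the task is described or implied, and the model must generalize without any demonstrations.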


Scientists have developed a neural network that demonstrates the human-like ability to make generalizations about language. The AI system performs as well as humans at incorporating newly learned words into existing vocabulary and using them in different contexts, a key aspect of human cognition known as systematic generalization. This breakthrough could lead to machines that interact with people more naturally than current AI systems. The neural network’s performance suggests a breakthrough in training networks to be systematic, according to cognitive scientist Paul Smolensky.
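Systematic generalization can be illustrated with a toy task of the kind used in this line of research: pseudo-words map to symbols, and function words compose them. The pseudo-words and the `twice` modifier below are illustrative, not the paper's exact benchmark. A learner that knows what "dax" means and what "twice" does can interpret "dax twice" without ever having seen that combination:

```python
# Primitive pseudo-words map to output symbols (illustrative vocabulary).
PRIMITIVES = {"dax": "R", "wif": "G", "lug": "B"}

def interpret(phrase: str) -> list:
    """Interpret 'PRIM' or 'PRIM twice' compositionally: the meaning of the
    whole follows from the meanings of the parts and how they combine."""
    words = phrase.split()
    out = [PRIMITIVES[words[0]]]
    if len(words) > 1 and words[1] == "twice":
        out = out * 2
    return out

print(interpret("dax"))        # ['R']
print(interpret("dax twice"))  # ['R', 'R']
```

Humans handle such novel combinations effortlessly; the reported result is that a suitably trained neural network can now match that compositional behavior rather than merely memorizing seen phrases.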

Have an interesting piece of news?

Please send it to us here; our editorial board will consider it for inclusion.