🚀 US Space Force gets scared

Plus: Hollywood actors are still striking over AI

🎨 AI Image of the Week

🧠 AI and Machine Learning

9-billion-parameter generative world model for autonomous driving (10 minute read)
GAIA-1 is a generative world model built specifically for autonomous driving. It learns representations of the environment and its future dynamics, giving a vehicle a structured understanding of its surroundings to inform driving decisions. Anticipating future events is fundamental for autonomous systems: a vehicle that can plan ahead acts more safely and efficiently on the road. Integrating world models into driving systems also promises to improve how those systems interpret human decisions and adapt to varied real-world scenarios. GAIA-1 takes video, text, and action inputs and generates realistic driving videos, with precise control over both ego-vehicle behavior and scene features; its multimodal design lets it generate video from a wide range of prompt modalities and combinations.
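To make the "video, text, and action inputs" idea concrete, here is a minimal toy sketch of how a world model of this kind conditions on multiple modalities: each modality is tokenized, the tokens are concatenated into one prompt sequence, and an autoregressive model predicts future video tokens. This is purely illustrative; every function name below is hypothetical and is not GAIA-1's actual API, and the "model" is a placeholder rather than a real transformer.

```python
# Toy sketch of multimodal conditioning in a GAIA-1-style world model.
# All names are hypothetical; the real model uses learned tokenizers and
# a large autoregressive transformer.

def tokenize(modality: str, data: list) -> list:
    """Toy tokenizer: tags each element with its modality."""
    return [(modality, x) for x in data]

def build_prompt(video_frames: list, text_ids: list, actions: list) -> list:
    """Concatenate past observations and conditioning signals into one sequence."""
    prompt = []
    prompt += tokenize("video", video_frames)   # past camera observations
    prompt += tokenize("text", text_ids)        # e.g. "turn left at the junction"
    prompt += tokenize("action", actions)       # ego-vehicle steering/speed
    return prompt

def predict_next_tokens(prompt: list, n: int) -> list:
    """Stand-in for the autoregressive model: emits n placeholder
    future video tokens, one per step."""
    return [("video", f"future_{i}") for i in range(n)]

prompt = build_prompt([0, 1, 2], [101, 102], [0.1, -0.2])
future = predict_next_tokens(prompt, 3)
print(len(prompt), len(future))  # 7 3
```

The design point the sketch captures is that any subset of modalities can be supplied: an empty text or action list simply contributes no tokens, which is what allows generation from "a diverse range of prompt modalities and combinations."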

Google to defend generative AI users from copyright claims (2 minute read)
Google has committed to defending users of the generative AI systems in its Google Cloud and Workspace platforms against allegations of intellectual property infringement, matching similar assurances from Microsoft and Adobe. These companies have invested heavily in generative AI and moved quickly to fold it into their products, prompting lawsuits from creators who argue that training on their work, and the content these systems then produce, violates their copyrights. Google's new policy covers software such as Vertex AI and Duet AI, which generate text and images inside Workspace and Cloud applications, but notably makes no mention of Bard, the company's best-known generative AI chatbot.

In The Age Of AI, Do We Have The Right To Die In Peace? (3 minute read)
Artificial intelligence's impact on industries such as film, journalism, and medicine has moved from a specialist topic to everyday headlines, and its economic and workforce implications are under constant assessment. Its social significance, though, particularly where it intersects with profoundly human experiences like death, still needs deeper exploration. One such realm is "grief tech," where companies use AI to virtually resurrect the deceased, offering solace to grieving relatives or, in some cases, perpetuating the likeness of famous people who have died. Deepfakes of the dead underscore a larger issue: absent federal AI regulation in the United States, AI-generated content can be legally used and monetized without the consent of the people whose contributions make up the datasets AI development depends on. Ethical concerns aside, some people do find comfort in the idea of engaging with AI representations of their departed loved ones.

💼 Business

Rewind Pendant is a wearable AI microphone that records and transcribes your conversations (2 minute read)
The Rewind Pendant, a wearable microphone that records and transcribes your conversations, is a compelling but privacy-fraught idea. The pendant is meant to let wearers stay present in the moment, knowing their conversations are being captured and made accessible later. Rewind AI envisions many uses for the device, in keeping with its mission to digitally document our lives and lighten the load on our memories. Its software, available for Mac and iOS and soon for Windows, has already drawn attention for recording users' digital device activity; the Pendant extends that recording from the virtual realm into the physical world. Unsurprisingly, the announcement of a device capable of such comprehensive recording has sparked as much concern as intrigue, particularly over privacy.

Hollywood actors remain on strike over AI, endorse ‘NO FAKES ACT’ (4 minute read)
The recent conclusion of the Hollywood writers' strike marked a victory with the introduction of new safeguards against AI involvement in screenwriting. However, the same fortune hasn't extended to actors. The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) reported a breakdown in talks with industry CEOs, who reportedly rejected SAG-AFTRA's proposed terms for a new contract, including limitations on AI and 3D scanned likenesses of actors. To address these concerns, SAG-AFTRA is backing a new bill that aims to prohibit the production or distribution of unauthorized AI-generated replicas of individuals for use in audiovisual or sound recordings without the replicated individual's consent.

Misc

NASA: We'd Have 30 Minutes' Warning Before a Killer Solar Storm Hits Earth (3 minute read)
A NASA team has been using AI models to analyze solar storm data with the aim of building an early warning system that could give Earth roughly 30 minutes' notice before a potentially catastrophic solar storm strikes a given region. Predicting a storm's arrival isn't enough on its own; understanding its likely impact on Earth matters just as much. To that end, researchers paired data from surface-based stations affected by storms with the satellite observations that detected those storms, then trained a deep learning model named DAGGER, which compares favorably with existing predictive algorithms attempting the same task.

Space Force gets scared, pauses all use of generative AI
The vigilant Guardians of the Space Force, the American military branch tasked with safeguarding terrestrial inhabitants from threats in space, have identified a new adversary: generative AI. Space Force leaders, who refer to their service members as Guardians, have prohibited the use of generative AI tools like ChatGPT on government devices, citing security risks and related concerns. In an internal memo obtained by Bloomberg, the Space Force's chief technology and innovation officer, Lisa Costa, acknowledged generative AI's revolutionary potential to empower the workforce but stressed the need for responsible integration. For now, responsible integration means refraining from generative AI tools altogether, over concerns about cybersecurity, data handling, and procurement requirements.

🐥 Best of Twitter

Thanks for reading, if you enjoyed, tell your friends!