Cyber Law Watch

Insight on how cyber risk is being mitigated and managed across the globe.

1. New Guidance Released for Australian Listed Companies on Continuous Disclosure Obligations During a Cyber Incident
2. Tennessee Moves First on AI Protections With ELVIS Act
3. Anticipated Tightened Data Privacy Regulations: Raid on Worldcoin
4. ICO Introduces Consultation Series on Data Protection and Generative AI
5. “Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls
6. FTC Issues First Order Prohibiting Sale of Sensitive Location Data
7. FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards
8. CJEU Decides on Use of Automatically Generated Scoring Values
9. CJEU Holds German Provisions for Imposing Fines on Companies for GDPR Violations Invalid
10. Provisional Political Agreement on Landmark AI Regulation in Europe

New Guidance Released for Australian Listed Companies on Continuous Disclosure Obligations During a Cyber Incident

By Cameron Abbott, Andrew Gaffney, Harry Kingsley, Rob Pulham, and Stephanie Mayhew

Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), has released new guidance on how to comply with market disclosure requirements when a listed company is in the middle of investigating and responding to a cyber incident.

Read More

Tennessee Moves First on AI Protections With ELVIS Act

By Jason W. Callen and Christopher J. Valente

On 21 March 2024, Tennessee became the first state in the United States to prohibit unauthorized use of artificial intelligence (AI) to replicate an individual’s likeness, image, and voice when its governor signed the Ensuring Likeness, Voice and Image Security Act of 2024 (ELVIS Act). The ELVIS Act’s protection of a person’s voice from AI misuse is particularly notable. Tennessee, like other states, already had prohibitions on unauthorized use of an individual’s likeness and image. And while some other states, such as California, have also protected a person’s voice, none had expressly linked all three (likeness, image, and voice) to AI.

Read More

Anticipated Tightened Data Privacy Regulations: Raid on Worldcoin

By Paul Haswell and Sarah Kwong

In late January 2024, Hong Kong’s privacy watchdog, the Office of the Privacy Commissioner for Personal Data (“PCPD”), raided six premises of Worldcoin, a cryptocurrency initiative co-founded by Sam Altman that requires an iris scan from users both for identification and to earn tokens. The PCPD opened an investigation into Worldcoin’s operations, suspecting that its collection of sensitive personal data (i.e., iris information) might infringe the Personal Data (Privacy) Ordinance (Cap. 486).

Read More

ICO Introduces Consultation Series on Data Protection and Generative AI

By Claude-Étienne Armingaud and Sophie Verstraeten

The Information Commissioner’s Office (ICO) recently launched a consultation series on how data protection laws should apply to the development and use of generative AI models (“Gen AI”). In the coming months, the ICO will publish further views on how to interpret specific requirements of the UK GDPR and Part 2 of the DPA 2018 in relation to Gen AI. This first part of the consultation focuses on whether it is lawful to train Gen AI on personal data scraped from the web. The consultation seeks feedback from stakeholders with an interest in Gen AI.

Read More

“Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls

By Andrew Glass, Gregory Blase, and Joshua Durham

Effective immediately, the Federal Communications Commission (FCC) banned AI-generated phone calls with its recent Declaratory Ruling (the Ruling). AI can be trained to mimic any person’s voice, producing audio or voice “deepfakes” that enable novel scams, such as a grandparent receiving a call from their “grandchild” and believing the grandchild has been kidnapped or needs money for bail. FCC Commissioner Starks deemed such deepfakes a threat to election integrity, recalling that just recently, “potential primary voters in New Hampshire received a call, purportedly from President Biden, telling them to stay home and ‘save your vote’ by skipping the state’s primary.”

Read More

FTC Issues First Order Prohibiting Sale of Sensitive Location Data

By Eric F. Vicente Flores and Whitney E. McCollum

On 9 January 2024, the Federal Trade Commission (FTC) issued its first settlement prohibiting a data broker from sharing or selling sensitive location data and requiring deletion of all location data collected deceptively. The FTC alleged that X-Mode Social (“X-Mode”) and its successor firm, Outlogic, LLC (“Outlogic”), failed to implement reasonable and appropriate safeguards on the use of such information by third parties. X-Mode/Outlogic collected personal information, including location data, via its mobile applications, which it would then sell to third parties.

Read More

FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards

By Whitney E. McCollum and Eric F. Vicente Flores

The Federal Trade Commission (FTC) issued a first-of-its-kind proposed order prohibiting Rite Aid Corporation from using facial recognition technology for surveillance purposes for five years.

The FTC alleged that Rite Aid’s facial recognition technology generated thousands of false-positive matches that incorrectly indicated a consumer matched the identity of an individual who was suspected or accused of wrongdoing. The FTC alleged that false-positive matches were more likely to occur in Rite Aid stores located in “plurality-Black,” “plurality-Asian,” and “plurality-Latino” areas. Additionally, Rite Aid allegedly failed to take reasonable measures to prevent harm to consumers when deploying its facial recognition technology. Such reasonable measures include: inquiring about the accuracy of its technology before using it; preventing the use of low-quality images; training or overseeing employees tasked with operating the facial recognition technology; and implementing procedures for tracking the rate of false-positive matches.

Read More

CJEU Decides on Use of Automatically Generated Scoring Values

By Dr. Thomas Nietsch

In its judgment dated 7 December 2023 (C-634/21 – Schufa), on a reference from the Administrative Court Wiesbaden (Germany), the Court of Justice of the European Union (CJEU) held that Article 22 of the GDPR (Art. 22 GDPR) also applies to probability values that credit scoring agencies create on the basis of personal data and that third parties use to decide whether to grant the individual credit or enter into a contract with them.

Read More

CJEU Holds German Provisions for Imposing Fines on Companies for GDPR Violations Invalid

By Dr. Thomas Nietsch

In a judgment dated 5 December 2023 (Case C-807/21 – Deutsche Wohnen), on a reference from the Higher Regional Court Berlin (Kammergericht), the Court of Justice of the European Union (CJEU) held that a German law under which an administrative fine may be imposed on a corporate entity only where an identified legal representative of that entity is proven to have committed a criminal or administrative offence that, at the same time, caused the entity to breach its obligations is not in line with the GDPR.

Read More

Provisional Political Agreement on Landmark AI Regulation in Europe

By Giovanni Campi, Petr Bartoš, and Kathleen Keating

In a landmark development, EU lawmakers reached a provisional political agreement on the Artificial Intelligence Act (AI Act) on 8 December 2023. Once adopted, this regulation will be the first of its kind and could set a global standard for AI laws.

Read More

Copyright © 2024, K&L Gates LLP. All Rights Reserved.