Cyber Security News & Tips by Glenda R. Snodgrass for The Net Effect

March 14, 2023

Good morning, everyone!

This week’s critical vulnerabilities:

Patch All the Things!



Is ChatGPT working for you? Or is it the other way around?

Last week's most attention-grabbing headline IMO: Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.

First, you need to understand how these LLMs work. They learn from examples, just like humans do. So the engineers feed them examples -- books, magazine articles, blog posts, cartoons, social media posts -- anything they can find to train these "artificial brains." Then, when the LLM has reached some base level of success (by whatever metric that particular team is using), it is released for the public to play with.
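
To make that concrete, here's a toy sketch of my own (nothing to do with OpenAI's actual code) of the learn-from-examples idea: a miniature "model" that records which word tends to follow which in its training examples, then generates new text from those observations. Real LLMs do this with neural networks and billions of examples, but the principle is the same.

    # Toy "language model": learns word-pair statistics from example
    # text, then generates from what it absorbed. Illustration only.
    import random
    from collections import defaultdict

    def train(examples):
        """Record which word follows which -- the 'artificial brain'."""
        model = defaultdict(list)
        for text in examples:
            words = text.split()
            for current, nxt in zip(words, words[1:]):
                model[current].append(nxt)
        return model

    def generate(model, start, length=8):
        """Produce text by repeatedly picking an observed next word."""
        out = [start]
        for _ in range(length):
            choices = model.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    corpus = [
        "the quick brown fox jumps over the lazy dog",
        "the lazy dog sleeps in the warm sun",
    ]
    print(generate(train(corpus), "the"))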

In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input a patient's name and medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

Did I say "play with"? I meant "feed it more examples"! That's why they let you play with it for free. Everything you type into something like ChatGPT is stored in its "brain" and used to help it respond better to future questions: "sensitive data ingested as training data into the models could resurface when prompted by the right queries."
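
Here's the risk in miniature. This is a hypothetical service I made up to show the mechanics of the concern, not a description of OpenAI's actual internals: every prompt is kept as a future training example, so a later user's query can pull pieces of it back out.

    # Hypothetical service that keeps every prompt as training data.
    training_corpus = ["some public article text it started with"]

    def submit_prompt(prompt):
        # The service answers your question AND remembers your prompt.
        training_corpus.append(prompt)

    def complete(query):
        # Crude stand-in for a model completion: surface any remembered
        # text that shares words with the query.
        query_words = set(query.lower().split())
        return [t for t in training_corpus
                if query_words & set(t.lower().split())]

    # An executive pastes in a confidential document...
    submit_prompt("acme corp 2023 strategy: acquire betasoft in q3")

    # ...and a completely different user later asks about Acme.
    print(complete("tell me about acme corp"))
    # -> the "confidential" strategy text resurfaces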

Please train your employees not to use ChatGPT (or any other LLMs that come along in the future) for work purposes without a very clear understanding of (1) how these tools work and (2) whether the data being fed to them is protected by company policy, NDAs, privacy laws or any other restrictions. (Note that JPMorgan, Amazon, Microsoft and Walmart have all warned their employees to take care in using generative AI services.) While you're at it, talk to your children, spouses, friends, co-workers, neighbors, extended family, acquaintances and even complete strangers about the danger of divulging private information to these "artificial brains."
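
If your people must use these tools, one simple guardrail is to scrub obvious identifiers before text ever leaves your hands. The sketch below is my own illustration -- redact_before_submitting and its two patterns are hypothetical, not a product -- and it also shows the limits of the approach, since the patient's name sails right through:

    import re

    # Two illustrative patterns; real data-loss prevention needs far more.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def redact_before_submitting(text):
        """Replace anything matching a known pattern before it goes out."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact_before_submitting(
        "Patient John Doe, SSN 123-45-6789, email jdoe@example.com"))
    # -> Patient John Doe, SSN [SSN REDACTED], email [EMAIL REDACTED]
    # Note: the name got through -- regexes alone are not enough.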

Have a secure and private week!


Glenda R. Snodgrass
grs@theneteffect.com
(251) 433-0196 x107
https://www.theneteffect.com
For information security news & tips, follow me!



Security Awareness Training Available Here, There, Everywhere!

Thanks to COVID-19, lots of things went virtual, including my employee Security Awareness Training. Live training made a comeback a few months ago, but many organizations are retreating to virtual again. No worries. Wherever you and your employees may be, I can deliver an interesting and informative training session in whatever format you prefer.

Contact me to schedule your employee training sessions. They're fun! ☺

TNE. Cybersecurity. Possible.

Speak with an Expert

Contact

The Net Effect, L.L.C.
Post Office Box 885
Mobile, Alabama 36601-0885 (US)
phone: (251) 433-0196
fax: (251) 433-5371
email: sales at theneteffect dot com
Secure Payment Center


Copyright 1996-2023 The Net Effect, L.L.C. All rights reserved. Read our privacy policy