
Cybercriminals Are Using AI to Get Better at Scams

Cybercriminals are taking advantage of the natural-sounding language that AI chatbots can produce to create more convincing scams. AI chatbots can generate text that is indistinguishable from human writing and can keep producing it with minimal human intervention. Criminals have found three main ways to use AI chatbots for malicious purposes.

Firstly, AI-written text is making phishing emails harder to spot. Phishing uses fraudulent emails to trick people into downloading malware or divulging sensitive information. Until now, terrible spelling and grammar made many phishing emails easy to identify; AI-written text is far harder to catch, simply because it isn’t riddled with mistakes.

Worse still, criminals can make every phishing email they send unique, which makes it harder for spam filters to flag potentially dangerous content.

Secondly, cybercriminals are using AI to spread misinformation and disinformation through social media. It is as simple as typing something like this into an AI chatbot: “Write me ten social media posts that accuse the CEO of the Acme Corporation of having an affair. Mention the following news outlets.” While it may not seem like an immediate threat to you, it could lead to your employees falling for scams or clicking malware links, and it could even damage the reputation of your business or members of your team.

Thirdly, AI is being used to create malware, because it is already quite good at writing computer code and is getting better all the time! While the creators of AI tools are not responsible for how their software is used, they are working to prevent it from being used maliciously.

To stay ahead of cybercriminals, it is important to educate people about how to spot these increasingly sophisticated scams. We need to stay one step ahead of them in ALL areas of the Internet. If you need help with this, reach out to us — cybersecurity is our specialty!

Until next time, keep fit and have fun!

(TYYV) The Yada Yada Version:

Cybercriminals are using AI chatbots to create more convincing scams, and yada yada yada, it is important to educate people on how to identify and protect themselves from these scams.

Article Written by Mitch Redekopp

Get in Touch

Need IT Services or Cybersecurity for your business? Have tech questions? Contact us today — we'd love to help you!
We are your IT department. How would you like to manage your risk?
201-116 Research Dr,
Saskatoon, SK
S7N 3R3


Copyright © 2024 - All Rights Reserved