Artlogic takes AI-related data concerns very seriously and has implemented several measures to safeguard our clients' content. This article explains how we protect our clients' data.
Our Approach
Preventing AI Training
- Our websites include a robots.txt file that instructs web scrapers and bots, including AI training bots like Facebook's, not to crawl or use our content (see the example directives after this list). While this isn't a foolproof guarantee, since some bots ignore these instructions, it is a key preventative step.
- Additionally, we monitor our platform and actively block suspicious traffic to reduce unauthorized data usage.
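As an illustration, robots.txt directives can disallow specific crawlers by user agent. The sketch below shows the general form using some commonly blocked AI training user agents (GPTBot, CCBot, Meta-ExternalAgent); these are examples only, not the exact directives used on Artlogic websites.

```
# Illustrative robots.txt entries that ask AI training crawlers not to index the site.
# The user agents listed here are examples, not the exact Artlogic configuration.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Meta-ExternalAgent
Disallow: /
```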
Client-Controlled Tools
- If you are the rights owner, you can use tools like the browser extension from spawning.ai, which allows you to submit your content to the Do Not Train Registry. This registry, supported by the EU, is recognized by many popular AI companies and helps protect your content from being used in AI training datasets.
Platform Policies & Practices
- Our platform does not send any data to AI tools.
- We have strict internal policies and training to ensure our engineers do not use client data in AI tools.
- We also continually monitor and enhance our processes to stay ahead of potential AI-related risks.
These efforts are part of our ongoing commitment to protecting your content and providing transparency in how we handle AI risks. If you have any additional concerns, we’re happy to address them further.