AI Policy

Effective Date: November 2025
Version: 1.0

Last Review: November 2025
Policy Owner: Chief Architect

1. ABOUT THIS POLICY
This policy explains how Vygo uses artificial intelligence (AI) in its platform, how we protect your data, and what responsibilities apply when using AI features.
Vygo’s approach is guided by transparency, privacy, security, fairness, and human oversight. In Europe, we follow the transparency and deployer requirements of the EU Artificial Intelligence Act (EU AI Act) and the data-sharing, access, and use principles of the EU Data Act. In the United States, we adhere to Federal Trade Commission (FTC) guidance on the fair, transparent, and accountable use of AI systems.

Our AI tools, such as sentiment analysis and automated message translation, are designed to improve communication and user experience while keeping your data secure. These tools support communication and learning, but they may not always be fully accurate or complete, may contain errors or omissions, and may be subject to bias. You are under no obligation to use AI tools, or to follow or use any of their output. You are solely responsible for the AI tools you decide to use, including ensuring that your use of them does not violate any applicable laws or the Vygo Terms of Service.
Vygo complies with major privacy laws, including the EU and UK GDPR, the Australian Privacy Act 1988 (Cth), and US privacy laws such as the CCPA/CPRA. Where we handle student data for US institutions, we also meet the requirements of FERPA and, where applicable, COPPA.

See our Privacy Policy for more information.

This policy forms part of Vygo’s wider privacy and compliance framework.

Scope:
This policy applies to all AI features within the Vygo platform and to all users, including administrators, mentors, and students.
It covers:
● The design, management, and responsible use of AI in Vygo’s products;
● Vygo’s privacy, security, risk-management, and transparency standards for AI; and
● User and partner responsibilities when using AI features.

This policy does not apply to Vygo’s internal product development activities that do not process user data. This policy is provided for transparency only and does not create any legal or contractual obligations for Vygo beyond those in its customer agreements.

2. AI IN EDUCATION AND MENTORSHIP
2.1 Vygo supports the ethical, human-centred use of AI to strengthen, not replace, real relationships in education. AI helps mentors and institutions enhance student support, engagement, and learning outcomes.
2.2 Certain analytics features may in future assess the quality of support interactions to help administrators improve mentor performance.
2.3 Customers must not use Vygo’s analytics to make or influence decisions that produce legal or similarly significant effects on individuals. If a customer configures features that could constitute automated decision-making with such effects, the customer must implement appropriate safeguards and notify Vygo before activation so that Vygo can confirm compatibility with the platform’s intended use.

3. DEFINITIONS
Artificial Intelligence (AI): Systems that perform tasks normally requiring human intelligence, such as language or translation models.
AI System: As defined under Article 3(1) of the EU AI Act: a machine-based system that infers, from inputs it receives, how to generate outputs such as predictions, content, or decisions influencing physical or virtual environments.
Children under 13 (COPPA): Children under the age of 13, to whom the U.S. Children’s Online Privacy Protection Act (COPPA) applies. Where applicable, Vygo uses COPPA-compliant measures, including verified parental consent, for child-directed services.
De-identification: The process of removing or altering personal identifiers so that data cannot reasonably be used to identify an individual, consistent with APP 11.2 and Recital 26 GDPR.
FERPA: The U.S. Family Educational Rights and Privacy Act.
FTC AI Guidance: The principles and expectations issued by the U.S. Federal Trade Commission on transparency, fairness, accountability, and avoidance of deceptive or unfair AI practices.
Generative AI: AI Systems capable of creating content such as text, code, or images based on prompts or training data.
Machine Learning Content: User inputs (“Submissions”) and the AI-generated outputs (“Generated Responses”) created through Vygo’s platform.
Personal Information: As defined under the Australian Privacy Act 1988, and equivalent to “Personal Data” under the EU/UK GDPR and “Personal Information” under the CCPA/CPRA.
Processing: Any operation performed on Personal Data, including collection, storage, disclosure, or deletion, as defined under Article 4(2) GDPR.
Sentiment Analysis: Text-based analysis of chat data, de-identified where possible, to provide aggregated insights into user sentiment and engagement trends. No biometric data is used.
Student Education Records: As defined under FERPA, records directly related to a student and maintained by an educational agency or institution, or by a provider acting as a school official.
Translation Tools: AI Systems that enable real-time translation of user messages between supported languages to improve accessibility and inclusion.
User: Any person or organisation using Vygo’s AI features.
User Data: Data provided to or collected by Vygo on behalf of a User including Personal Information and Student Education Records processed as a “school official” under FERPA.
US State Privacy Laws: The CCPA/CPRA (California), VCDPA (Virginia), CPA (Colorado), and other equivalent state privacy frameworks applicable to personal data processing.
Other capitalised terms not defined herein have the meanings set out in our Privacy Policy.

4. GOVERNANCE AND AI MANAGEMENT
4.1 Infrastructure: Vygo’s AI systems operate within our secure, cloud-based environment managed by approved providers.
4.2 SOC 2 Type II Compliance: AI features follow the same SOC 2 Type II controls that apply to the rest of the Vygo platform.
4.3 Regional Data Handling: User Data is stored in the customer’s contracted region in accordance with our Privacy Policy. AI processing may occur cross-region to ensure access to secure, up-to-date models.
4.4 Monitoring: Vygo monitors its AI features to help ensure consistent and reliable performance as part of its standard product-development and security processes.
4.5 Risk and Impact Assessments: Vygo reviews AI integrations for privacy, security, and ethical risks and completes Data Protection or Data Transfer Impact Assessments where required by law.
4.6 Incident Handling: Vygo treats any incident involving AI features with the same level of diligence and security as other data incidents. Where required by applicable law, Vygo will notify affected customers and regulators in accordance with applicable breach-notification obligations.
4.7 U.S. AI and Privacy Compliance: Vygo’s governance framework is designed to align with applicable U.S. privacy and consumer protection laws, including FERPA, COPPA, and the CCPA/CPRA, and to uphold FTC principles on fairness, transparency, and accountability in the use of AI.

5. TRANSPARENCY AND AI FEATURES
5.1 Disclosure: Vygo informs users whenever an AI System is available for use within the platform, in line with the disclosure and explainability requirements of the EU AI Act, UK ICO guidance, and FTC AI transparency standards. Look for the AI logo when using AI-enabled features.
5.2 Information Requests: Customers may request details about an AI feature’s purpose, data sources, and safeguards.
5.3 Feedback and Review: Vygo may review limited feedback or usage data to help maintain and improve AI feature performance. Any such review is subject to Vygo’s internal privacy and security controls.
5.4 AI Interaction Notice: Vygo provides clear notice of AI use, purpose, and relevant data categories when users engage with AI features.
5.5 AI Features
Vygo releases new AI Features over time.
a) Chat Sentiment Analysis: Vygo may use Sentiment Analysis to provide aggregated insights into student or cohort sentiment and engagement.
b) AI Translation: Vygo’s translation tools enable messages to be translated into supported languages to improve accessibility and inclusion. Translations may contain inaccuracies. Vygo disclaims liability for translation errors or misinterpretations. Message data processed for translation is not used to train AI models. Where translation is enabled for personal data, processing is limited to translation purposes and subject to lawful basis and contract requirements.

6. RESPONSIBILITIES AND ACCEPTABLE USE
6.1 Users (including administrators, mentors, and students) are responsible for how they use Vygo’s AI features made available to them through their organisation.
6.2 Only upload or process information you are authorised to use.
6.3 Avoid entering confidential, sensitive, or personal data unless your contract and configuration expressly permit it (e.g., institution-enabled translation).
6.4 Where applicable, ensure you have a lawful basis (GDPR/UK GDPR) or appropriate authority under FERPA/COPPA.
6.5 Do not input:
• Personal data unless authorised and lawful;
• Confidential, proprietary, or client information;
• Source code or third-party IP without permission.
6.6 Users should review AI-assisted outputs before publication, distribution, or client use.
6.7 AI-generated responses may not always be accurate or complete; users should validate outputs before relying on them. Vygo does not guarantee the accuracy or completeness of AI-generated content and is not responsible for any reliance placed on such content, subject to non-excludable rights under consumer law.
6.8 Institutions must ensure their use aligns with academic-integrity and assessment policies.
6.9 Users must immediately report misuse, unauthorised disclosure, or AI-related incidents to privacy@vygoapp.com.
6.10 Allowed with care: drafting messages, summarising non-confidential data, internal idea generation.
6.11 Requires approval: any use involving personal, student, or confidential data.
6.12 Prohibited: uploading secrets, assessment data, or sensitive information into public or non-approved tools.
6.13 Misuse may result in suspension of access or disciplinary action.
6.14 Users must not use AI to generate or transmit material that violates Vygo’s Terms of Use or Acceptable Usage Policy.

7. DATA AND PRIVACY PRINCIPLES
7.1 No user data is used to train any large language model.
7.2 All processing complies with SOC 2, the GDPR/UK GDPR, the Australian Privacy Act 1988 (including the APPs), and applicable U.S. privacy and consumer protection laws, including the CCPA/CPRA, VCDPA, and FTC guidance on AI. For US education customers, Vygo acts as a FERPA “school official” with a legitimate educational interest and ensures that AI-enabled processing involving children complies with COPPA.
7.3 Vygo incorporates privacy-by-design principles in the development of AI features.
7.4 AI data handling is fully integrated within Vygo’s cloud infrastructure, ensuring regional storage and contractual compliance.
7.5 Cross-border transfers are managed under Vygo’s Data Processing Addendum using appropriate safeguards, including EU Standard Contractual Clauses (2021) or their UK equivalents, and may rely on the EU-US Data Privacy Framework (and UK Extension) where our US subprocessors are certified.

8. INCIDENT MANAGEMENT
8.1 Vygo follows its internal data breach policy as part of its SOC 2 certification framework. Users are required to report any suspected misuse or exposure immediately. Where required by law, Vygo will notify affected customers, partners, or regulators in accordance with applicable data-breach notification laws, including under the GDPR/UK GDPR, FERPA breach notification guidance, COPPA parental notice requirements, and relevant US state laws. Root-cause analysis and remediation measures are documented and retained.

9. MONITORING AND REVIEW
Vygo reviews its AI Systems periodically, and at least annually, for fairness, accuracy, and ethical risk. The Chief Architect oversees these reviews as part of Vygo’s AI Governance Framework. We also review updates to the EU AI Act implementation and ICO/OAIC guidance and adjust this Policy as necessary.

10. REPORTING NON-COMPLIANCE
10.1 Vygo encourages anyone to report suspected breaches of this policy or misuse of AI Systems.
10.2 Who can report: Any Vygo employee, contractor, university partner, or user aware of potential non-compliance.
10.3 How to report: To the Information Security Officer, Chief Privacy Officer, or privacy@vygoapp.com.
10.4 Confidentiality: Reports are handled confidentially and disclosed only to those investigating.
10.5 Non-retaliation: No person will face adverse treatment for reporting a concern in good faith.
10.6 Follow-up: All reports are investigated promptly, with corrective or disciplinary action taken where appropriate.
