1. Scope of Application
This Policy applies to all individuals, organizations, developers, commercial customers, end users, partners, and third parties using AI-enabled features, agents, AI-assisted workflows, content generation, analysis, recommendation, or automation functions that are provided, integrated, orchestrated, or enabled for connection by HEMIDI JOINT STOCK COMPANY ("Hemidi").
This Policy shall be read together with the applicable Terms of Service, the Acceptable Use Policy, the Privacy Policy, the service-specific terms, the API / Developer Platform Terms, and other applicable documentation. In the event of any conflict, the document specifically applicable to the relevant product, service package, or legal relationship shall prevail to the extent of such conflict.
2. Nature of the AI Service
At the current stage, Hemidi may provide integration layers, interfaces, workflows, orchestration, or connection tools for third-party artificial intelligence services selected by the User or customer. Unless Hemidi expressly states otherwise in writing for a specific Service, nothing in this Policy shall be construed to mean that Hemidi is, by default, the provider of the foundation model or the party directly operating the AI model layer.
AI-enabled features may generate probabilistic outputs, synthetic outputs, recommendations, analyses, summaries, automatically generated content, or model-assisted actions. Due to the nature of artificial intelligence systems, including systems provided by third parties:
- outputs may be inaccurate, incomplete, outdated, misleading, or unsuitable for a specific purpose;
- outputs may not be unique and may be the same as or similar to outputs generated for other users;
- the system may respond differently to similar inputs depending on technical context, model version, safety configuration, or deployment environment.
You must independently assess the suitability of outputs before using, publishing, relying on, or allowing outputs to lead to real-world actions.
3. Human Review Obligations
Unless otherwise agreed with Hemidi in a separate written agreement or stated in a Hemidi publication, you must not use AI-enabled features provided, integrated, or orchestrated through Hemidi's Service as the sole or final basis for decisions that may have legal, financial, medical, employment, insurance, housing, educational, security, law-enforcement, or similarly material consequences for individuals, without appropriate independent human review.
You are responsible for:
- designing review, approval, or verification processes appropriate to the risk level of the use case;
- checking facts, source references, calculations, source code, recommendations, or actions generated by the system before use;
- taking reasonable steps to ensure that end users do not mistake the system's outputs for professional advice, official approval, or human decision-making where that is not the case.
4. Disclosure Obligations to End Users
If you deploy AI-enabled features through Hemidi's Service to interact with end users, end customers, personnel, candidates, patients, learners, or other parties, you must provide appropriate disclosure where the relevant use case so requires, including but not limited to:
- that content, recommendations, responses, or actions are being generated or assisted by an artificial intelligence system;
- that outputs may need to be checked or verified before being relied upon;
- any material limitations that end users need to know in order to use the outputs safely and lawfully.
5. Prohibited Conduct in the AI Context
You must not use AI-enabled features provided, integrated, or orchestrated through Hemidi's Service to:
- develop, support, or operate unlawful conduct, fraud, harassment, deception, manipulation, unauthorized intrusion, or the dissemination of malware;
- generate or disseminate content clearly prohibited by law, including but not limited to child sexual exploitation material, content promoting terrorism or human trafficking, or content directly serving serious crime;
- build profiles, score, classify, or make automated decisions about individuals in high-risk contexts without appropriate safeguards, legal basis, and human review;
- impersonate identities, falsify Hemidi's endorsement, or cause others to misunderstand that outputs have been verified, certified, or guaranteed by Hemidi;
- use outputs, system signals, orchestration logic, non-public system behavior, or non-public documentation of Hemidi to build competing artificial intelligence models or products contrary to the applicable documentation.
6. Generated Content and User Responsibility
As between Hemidi and you, rights in inputs and outputs are determined under the applicable Terms of Service, product configurations, and relevant commercial documentation. This Policy does not independently create any intellectual property rights beyond those set out in the applicable documentation.
You are responsible for:
- the legality of the data, prompts, documents, source code, images, or other content that you input into the system;
- reviewing outputs before publishing, integrating, sharing, commercializing, or using them as the basis for real-world action;
- ensuring that outputs are not misrepresented, do not infringe third-party rights, and do not violate applicable law.
7. Data Used to Improve AI Systems
Whether Hemidi or a third-party AI provider processes data to train, fine-tune, or improve models, features, safety mechanisms, or system performance shall be determined in accordance with the Privacy Policy, the applicable Terms of Service, product configurations, the provider selected by the User, commercial agreements, and any documentation specifically applicable to that Service.
If a Service includes an opt-out mechanism, opt-in mechanism, dedicated data configuration, minimum-retention mode, or dedicated data isolation mechanism, the documentation applicable to that Service shall prevail to the relevant extent.
8. Safety, Monitoring, and Enforcement
Hemidi reserves the right, at its discretion, to apply safety filters, abuse detection mechanisms, rate limits, feature limits, warning labels, review mechanisms, automatic blocking mechanisms, suspensions, or other technical or organizational measures to protect the system, Users, or third parties.
If Hemidi has reasonable grounds to believe that your use case creates safety risk, legal risk, privacy risk, abuse risk, or material operational risk, Hemidi reserves the right to:
- require you to provide additional information regarding the use case;
- limit, disable, or revoke some or all access to the relevant AI features;
- require adjustments to usage processes, monitoring processes, or disclosures applicable to end users;
- suspend or terminate usage rights under the applicable Terms of Service.
9. Policy Updates
Hemidi reserves the right to update this Policy from time to time to reflect changes in products, risk models, legal obligations, deployment architecture, or operational requirements. If a change materially affects how Hemidi implements governance for AI systems, Hemidi may provide notice through the Service, product documentation, in-system notices, or another official channel.
10. Contact
If you have questions about this Policy or wish to report risky use cases, safety incidents, abuse, or violating content relating to AI-enabled features on Hemidi's Service, you may contact support@hemidi.com or another contact point officially published by Hemidi on the Service from time to time.