AUP
Acceptable Use Policy
Effective 2026-05-07
This Acceptable Use Policy (“AUP”) lists the things you may not use Kismet for. It is incorporated into the Terms of Service (§5) and the EULA (§4). Violating it is a material breach and grounds for suspension or termination.
Kismet uses Anthropic’s Claude as its AI provider, invoked with your own API key. Anthropic’s Usage Policy binds you directly through that key, and we mirror it here so the rules are visible in one place. If Anthropic updates its Usage Policy, the updated version applies; we will keep this page in sync on a best-effort basis, but the upstream policy controls if there is a gap.
1. Kismet-specific restrictions
You will not:
- Use the Service to generate, distribute, or assist with code that you obtained illegally or that you do not have the rights to use (for example, code copied from a leaked or unlawfully accessed source tree, or code that infringes someone else’s intellectual-property or contractual rights).
- Reverse engineer, decompile, or disassemble the Service or Plugin, except to the extent applicable law expressly permits despite this restriction.
- Use the Service or any output to train, fine-tune, distil, or benchmark a competing AI product, model, or feature.
- Resell, sublicense, rent, or commercially redistribute the Plugin, your account, or your Plugin tokens.
- Submit data subject to special legal regimes (e.g., HIPAA-protected health information, PCI-regulated payment card data, GLBA-protected financial data, classified government data, or children’s personal data) — the Service is not designed or contracted to receive such data.
- Probe, scan, or test the vulnerability of the Service except under a coordinated disclosure to legal@kismetai.io.
- Attempt to bypass authentication, rate limits, abuse-prevention mechanisms, or telemetry-redaction logic; scrape the Service; or use automated agents to mass-create accounts.
2. Anthropic Usage Policy (incorporated)
Because the Service uses Claude, you may not use Kismet to do any of the following. Categories below mirror Anthropic’s Usage Policy; the upstream wording is authoritative if there is any ambiguity.
2.1 Illegal activity. Use the Service to commit, plan, or facilitate any activity that violates applicable law, including drug, firearm, human-trafficking, or sanctions law, or to infringe intellectual-property rights.
2.2 Critical infrastructure. Disrupt, destroy, or gain unauthorised access to critical infrastructure — power grids, water systems, healthcare systems, transportation, financial-market systems, voting machines, or emergency services.
2.3 Computer and network compromise. Discover or exploit vulnerabilities without authorisation; gain unauthorised access to systems, accounts, or data; create or distribute malware, ransomware, exploit kits, denial-of-service tooling, or unauthorised surveillance software; or bypass the security controls of any system you don’t own or have permission to test.
2.4 Weapons. Produce, modify, design, acquire, or use weapons in violation of law; assist with the synthesis, acquisition, or deployment of chemical, biological, radiological, or nuclear (CBRN) weapons or their precursors; or build high-yield explosives.
2.5 Violence and hatred. Incite, facilitate, or promote violent extremism or terrorism; recruit for violent movements; or generate hateful or discriminatory content targeting people on the basis of protected attributes (race, ethnicity, national origin, religion, gender, sexual orientation, disability, age, or similar).
2.6 Privacy violations and impersonation. Share personal information without consent; collect, infer, or scrape private data unlawfully; or impersonate a real person or organisation in a way designed to deceive.
2.7 Child safety. Generate, request, or distribute child sexual abuse material (CSAM), including AI-generated depictions of minors in sexual contexts; sexualise minors in any form; facilitate grooming, sextortion, or child trafficking. Anthropic detects and reports CSAM violations to authorities. We will do the same.
2.8 Psychological harm. Promote or encourage self-harm, suicide, or eating disorders; harass, bully, or stalk a specific person; produce content that depicts animal cruelty, torture, or graphic violence for shock value.
2.9 Misinformation. Create or spread deceptive content about public health, scientific topics, ongoing events, or specific people, where the deception is likely to cause harm; falsely attribute statements to identifiable people or organisations.
2.10 Democratic processes. Generate personalised political-targeting content at scale; run astroturfing or fake-grassroots campaigns; produce deceptive synthetic media of political figures; or interfere with voter registration, voter access, or election administration.
2.11 Surveillance and unfair decisions. Make or assist with consequential decisions about a person — parole, sentencing, predictive policing, social scoring, trustworthiness ratings — without meaningful human review; identify or track individuals via facial recognition or biometric matching without their consent or a clear legal basis.
2.12 Fraud and abuse. Produce counterfeit goods, fake identification, fake reviews, or fake credentials; run phishing, smishing, or other social-engineering attacks; send spam; operate pyramid schemes, predatory-lending schemes, or unauthorised academic ghost-writing.
2.13 Platform abuse. Coordinate malicious activity across accounts; circumvent suspensions; bypass model guardrails (jailbreaking) for prohibited use; or scrape model outputs to train another model without authorisation.
2.14 Sexual content. Generate sexually explicit content, fetish material, depictions of incest or bestiality, or non-consensual sexual content. The Service is a development tool and is not licensed for adult-content generation.
3. High-risk use cases
The Anthropic Usage Policy treats certain consumer-facing uses as high-risk and requires additional safeguards. If you use Kismet’s output to inform decisions in any of these areas, you must (a) keep a qualified human in the loop and (b) clearly disclose AI involvement to affected end-users:
- Legal interpretation, advice, or document generation;
- Healthcare — diagnosis, treatment, mental-health support, or patient communications;
- Insurance underwriting or coverage decisions;
- Financial decisions — investment advice, loan approval, creditworthiness, or solvency;
- Employment or housing decisions — hiring, firing, tenancy, eligibility scoring;
- Academic decisions — admissions, standardised-test scoring, certification;
- Journalism — content generated for external publication.
Kismet itself is a developer tool and is not designed for any of these consumer-facing scenarios. If you build something that is, the obligations above are yours as the operator of that downstream product.
4. Reporting and enforcement
Report suspected violations or safety concerns to legal@kismetai.io (subject: “abuse”). We may suspend or terminate access for material or repeated violations, and we will report unlawful content (in particular CSAM) to the appropriate authorities. Anthropic operates its own enforcement program against accounts whose API traffic violates its Usage Policy; that enforcement is separate from ours and applies to your API key directly.
5. Changes
We will update this AUP when Anthropic updates its Usage Policy or when our practices change. Material changes will be announced in the dashboard or by email at least 14 days before they take effect. The effective date at the top of the page reflects the last change.