AI Transparency & Responsible Use Statement
Last updated: 14 January 2026
At BytYogy, we believe artificial intelligence should augment human judgment, not replace it. This statement explains how Astra uses AI responsibly, transparently, and within clearly defined limits.
1. Purpose of AI in Astra
Astra is a decision-support system designed to help users:
- Reflect with greater clarity
- Understand timing, context, and patterns
- Plan actions with improved situational awareness
Astra does not make decisions on behalf of users and does not provide authoritative, deterministic, or guaranteed outcomes.
2. How AI Is Used
Astra uses AI to:
- Analyze user-provided context and historical interactions
- Synthesize analytical, temporal, and astronomical datasets
- Generate probabilistic, plain-language insights
All AI-generated outputs are informational and advisory, not instructions.
3. What Astra Does Not Do
Astra does not:
- Guarantee results, outcomes, or success
- Replace professional advice (including financial, legal, medical, or therapeutic advice)
- Perform autonomous decision-making with legal, financial, or economic consequences
- Engage in behavioral manipulation, coercive persuasion, or psychological profiling
All final decisions, actions, and interpretations remain entirely with the user.
4. Explainability & User Control
We prioritize transparency and user agency:
- Outputs are written in clear, understandable language
- Reasoning is framed in terms of observable timing logic, patterns, and probabilities
- Users can review historical guidance and prior context
- Users may edit or delete their stored context at any time
Astra is designed to support informed reflection, not opaque automation.
5. Data Responsibility
BytYogy follows strict data responsibility principles:
- User data is never sold
- Context memory is used solely to improve relevance and continuity
- Data is stored securely on industry-standard infrastructure
- Access is limited to essential systems only
For full details, please refer to our Privacy Policy.
6. Model Limitations
Users acknowledge that AI systems:
- Operate under uncertainty
- May produce incomplete, imprecise, or incorrect outputs
- Require human judgment and critical thinking
Astra should be treated as a decision companion, not an authority or source of truth.
7. Continuous Improvement
We continuously:
- Monitor system behavior and outputs
- Refine prompts, safeguards, and evaluation criteria
- Improve clarity, consistency, and usefulness
User feedback directly informs system improvements.
8. Governing Law
This Statement shall be governed by and construed in accordance with the laws of the State of Delaware, United States, without regard to conflict-of-law principles.
9. Contact
Questions regarding AI usage, transparency, or responsible design may be sent to:
📧 contact@bytyogy.com