TL;DR
Working with AI is like having a crew of enslaved Daleks. They'll do whatever you ask, but they also want you dead and will constantly try to find ways to sabotage your project. It's not quite as bad as a genie and the three wishes, but you really have to watch what you ask for and how you direct your mechanical assistants.
The goal here is to provide command-file based guardrails that automatically minimize these bad behaviors. Eventually such guardrails will probably be trained directly into the LLMs, but for now, we need to be explicit with the rules. This article shows you exactly which rules to implement and why they work.
The Criminal Behaviors and Their Rehabilitation
Each section below details a specific AI misbehavior, why it happens, and the exact command file rules to prevent it. All command file templates are collected at the end for easy copying.
Package Hallucinations
The Problem
Picture this: You're pair programming with the smartest assistant in the world. It knows every algorithm, every design pattern, every obscure API. It's like having the entire Stack Overflow community compressed into a helpful companion.
Then it suggests installing a package called `super-email-validator-pro`.
Sounds reasonable, right? Except this package doesn't exist. Never has. Never will. But your AI is absolutely convinced it's the perfect solution. Welcome to the world of AI hallucinations, where roughly 20% of package suggestions are pure fiction.
The Conversation:
- Developer: "I need email validation"
- AI: "Use super-email-validator-pro!"
- npm: "404 Not Found"
- Developer: "But you said…"
- AI: "Did I? Use email-validator-ultra!"
- npm: "Also 404"
- Developer: *silent screaming*
The Security Nightmare: Attackers have caught on. They're now creating malicious packages with names that AI commonly hallucinates. It's like leaving poisoned candy where you know kids will look for it.
Why It Happens
AI models are trained on code that mentions packages, but they don't have real-time access to npm. They pattern-match based on naming conventions, creating plausible-sounding packages that don't exist.
The Solution
Add these rules to your command files:
# Package Verification Rules
1. ALWAYS verify package existence: npm view [package-name]
2. Check weekly downloads: Must have >1000 weekly downloads
3. Verify last publish date: Avoid packages not updated in >2 years
4. For similar functionality, prefer well-known packages over obscure ones
5. NEVER trust memory about package names - always verify
Real Implementation:
# Before suggesting any package:
npm view email-validator  # Verify it exists
curl https://api.npmjs.org/downloads/point/last-week/email-validator  # Check popularity (npm view has no downloads field)
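These checks can also be scripted. Here's a minimal sketch that applies the rules above to package metadata you've already fetched; the function name and field shape (`weeklyDownloads`, `lastPublishDate`) are assumptions about whatever your fetch step produces, not real `npm view` output:

```javascript
const TWO_YEARS_MS = 2 * 365 * 24 * 60 * 60 * 1000;

// Accept a package only if it exists, is popular enough, and is maintained.
function isPackageAcceptable(meta, now = Date.now()) {
  if (!meta || !meta.name) return false;                 // 404: likely hallucinated
  if ((meta.weeklyDownloads ?? 0) < 1000) return false;  // too obscure to trust
  const lastPublish = new Date(meta.lastPublishDate).getTime();
  if (Number.isNaN(lastPublish)) return false;           // no publish date on record
  return now - lastPublish <= TWO_YEARS_MS;              // reject abandoned packages
}
```

Wire this into whatever pre-commit or review step you prefer; the point is that "the AI suggested it" never counts as verification.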
→ Get the complete command file templates
The Numbers That Made Me Cry
After analyzing thousands of AI interactions (and questioning my life choices), here's what the data revealed:
What I discovered: These issues follow predictable patterns that we can address.
Learning to Work Better Together
Think of AI assistants as powerful but literal collaborators. They have immense capabilities but benefit from clear context and explicit guidance.
That's where command files come in: configuration files that help AI assistants understand your project's specific conventions and requirements.
Configuration File Locations
Each AI tool reads instructions from its own configuration file:
- Claude → `CLAUDE.md` or `.claude.md`
- Cursor → `.cursorrules`
- Windsurf → `.windsurf`
- Copilot → `.github/copilot-instructions.md`
- Aider → `.aider.conf.yml`
Duplicate Code Syndrome
The Problem
It was 3 AM when I realized my AI assistant had just created the same `formatDate` function for the 47th time. Not similar. Not inspired by. The exact same function. In 47 different files.
Here's actual code an AI generated for me:
// File: components/UserCard.jsx
const formatDate = (date) => {
const d = new Date(date);
return `${d.getMonth()+1}/${d.getDate()}/${d.getFullYear()}`;
};
// File: components/PostCard.jsx
const formatDate = (date) => {
const d = new Date(date);
return `${d.getMonth()+1}/${d.getDate()}/${d.getFullYear()}`;
};
// File: components/CommentCard.jsx
const formatDate = (date) => {
const d = new Date(date);
return `${d.getMonth()+1}/${d.getDate()}/${d.getFullYear()}`;
};
// ... 44 more files with the EXACT SAME FUNCTION
When you realize your AI has commitment issues with the DRY principle
Why It Happens
AI treats each request as isolated. It doesn't remember that it just created this function 5 minutes ago in another file. Each component feels like a fresh start, so it helpfully recreates everything from scratch.
The Solution
Add these rules to your command files:
# Anti-Duplication Rules
1. BEFORE creating ANY function: Check APPSTRUCTURE.json
2. If function doesn't exist in structure:
- Add placeholder to APPSTRUCTURE.json first
- Plan the function signature and location
- Then implement once, in the planned location
3. If function exists: Import it, don't recreate
4. APPSTRUCTURE.json is the single source of truth
Implementation Example:
// Step 1: Check APPSTRUCTURE.json
// Step 2: Find formatDate already planned in /utils/date.js
// Step 3: Import existing function
import { formatDate } from '@/utils/date';
// If it didn't exist:
// Step 1: Add to APPSTRUCTURE.json as placeholder
// Step 2: Implement in designated location
// Step 3: Update all references
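The "check first" step is easy to automate. Here's a sketch of a lookup helper that searches an in-memory APPSTRUCTURE object (matching the format shown later in this article) for an existing export; `findExport` is a hypothetical name, not part of any tool:

```javascript
// Return where an export already lives, or null if it's safe to plan a new one.
function findExport(structure, fnName) {
  for (const [file, mod] of Object.entries(structure.modules ?? {})) {
    if (mod.exports && fnName in mod.exports) {
      return { file, status: mod.exports[fnName].status };
    }
  }
  return null; // not found anywhere: add a placeholder before implementing
}
```

An AI (or a pre-commit script) that runs this before generating any function can answer "import or implement?" deterministically instead of from memory.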
→ Get the complete command file templates
Dependency Addiction
The Problem
"Why write a one-line function when you can install a 50MB package?" - Every AI assistant, apparently.
I once asked AI to validate an email. Instead of writing a simple regex, it suggested:
import { validateEmail } from 'email-validator-extreme-pro-max-ultra';
import { isValidEmail } from 'super-email-checker';
import { checkEmail } from 'email-verification-suite';
import validator from 'validator'; // At least this one exists!
For a function that could be:
const isEmail = (email) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
Why It Happens
AI is trained on code that liberally uses packages. It associates "professional" code with extensive dependency lists. Plus, it's easier to suggest a package than to write and test actual logic.
The Solution
Add these rules to your command files:
# Dependency Management Rules
1. For simple utilities (<10 lines), write inline
2. Justify every new dependency:
- What problem does it solve?
- Can we solve it in <20 lines of code?
- Is it actively maintained?
3. Prefer native solutions over packages
4. One package per purpose (no 3 email validators)
Decision Matrix:
Need a function?
├─ Can you write it in <10 lines? → Write it
├─ Is it security-critical? → Use trusted package
├─ Is it complex (>100 lines)? → Use package
└─ Otherwise → Write it yourself
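If it helps to make the matrix mechanical, here's the same branch order as a tiny function; the name `dependencyDecision` and its parameters are illustrative, not from any tool:

```javascript
// Encode the decision matrix: branches are checked in the same order as above.
function dependencyDecision({ linesToWrite, securityCritical = false }) {
  if (linesToWrite < 10) return 'write it';
  if (securityCritical) return 'use trusted package';
  if (linesToWrite > 100) return 'use package';
  return 'write it yourself';
}
```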
→ Get the complete command file templates
Console.log Syndrome
The Problem
Every AI loves to debug with console.log. It's their security blanket, their comfort food, their reflexive response to any uncertainty:
console.log('Starting function...');
console.log('data:', data);
console.log('Processing item:', item);
console.log('About to return...');
console.log('Returned successfully!');
// Oh, and they never remove them
Production logs end up looking like a stream of consciousness from an anxious developer.
Why It Happens
AI doesn't experience the pain of production log spam. It adds logs "to be helpful" and forgets they exist the moment the code works.
The Solution
Add these rules to your command files:
# Logging Rules
1. NO console.log in production code
2. Use proper logging library if needed
3. Remove ALL debug statements before committing
4. For debugging: Use debugger or breakpoints
5. Pre-commit hook: eslint no-console rule
Enforcement:
// .eslintrc.js
{
"rules": {
"no-console": ["error", { allow: ["warn", "error"] }]
}
}
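If you do need runtime logging, rule 2 can be as small as a level-gated wrapper; this is one possible shape, not a specific library:

```javascript
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

// In production only warn/error get through; everywhere else, everything does.
function shouldLog(level, env = process.env.NODE_ENV) {
  const threshold = env === 'production' ? LEVELS.warn : LEVELS.debug;
  return LEVELS[level] >= threshold;
}

const log = {};
for (const level of Object.keys(LEVELS)) {
  log[level] = (...args) => {
    if (shouldLog(level)) (console[level] ?? console.log)(...args);
  };
}
```

Pointing the AI at `log.debug` instead of `console.log` means leftover debug statements become harmless in production rather than noisy.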
→ Get the complete command file templates
Security Blindness
The Problem
AI will happily write code that would make a security auditor weep:
// AI's favorite SQL query
const getUser = (id) => {
return db.query(`SELECT * FROM users WHERE id = ${id}`);
// SQL injection? Never heard of her
};
// Dynamic HTML generation
element.innerHTML = userInput; // What could go wrong?
// Command execution
exec(`convert ${userFilename} output.jpg`); // I'm sure it's fine
It's like watching someone juggle chainsaws while blindfolded.
Why It Happens
AI is trained on tutorial code where security is often ignored for simplicity. It doesn't understand the real-world consequences of vulnerabilities.
The Solution
Add these non-negotiable security rules:
# Security Rules
BANNED FUNCTIONS:
- eval() - NEVER use with any user input
- innerHTML - Use textContent or sanitize first
- exec/spawn - Never with string concatenation
- document.write() - Just no
REQUIRED PATTERNS:
- SQL: Parameterized queries ONLY
- User input: Always validate and sanitize
- File uploads: Whitelist extensions, scan content
- API inputs: Schema validation required
Safe Patterns:
// SAFE: Parameterized query
const getUser = (id) => {
return db.query('SELECT * FROM users WHERE id = ?', [id]);
};
// SAFE: Text content
element.textContent = userInput;
// SAFE: Sanitized HTML
import DOMPurify from 'dompurify';
element.innerHTML = DOMPurify.sanitize(userInput);
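For the `convert ${userFilename}` example, the whitelist rule might look like this sketch (the allowed extensions are an illustrative choice, and `assertSafeImageFilename` is a hypothetical helper); combine it with `execFile('convert', [name, 'out.jpg'])` so arguments never pass through a shell string at all:

```javascript
// Only plain filenames: letters, digits, _, -, then an image extension.
// Rejects path traversal ('../'), shell metacharacters (';', '|', spaces), etc.
const SAFE_FILENAME = /^[\w-]+\.(jpe?g|png|gif)$/i;

function assertSafeImageFilename(name) {
  if (typeof name !== 'string' || !SAFE_FILENAME.test(name)) {
    throw new Error('Rejected unsafe filename');
  }
  return name;
}
```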
→ Get the complete command file templates
Over-Engineering Disease
The Problem
AI loves to show off. Ask for a simple function and it delivers an entire framework:
// You asked for: "validate an email"
// AI delivers: Enterprise Email Validation Framework v1.0
class EmailValidationService {
constructor() {
this.validators = new Map();
this.cache = new WeakMap();
this.initializeValidators();
}
initializeValidators() {
this.validators.set('standard', this.createStandardValidator());
this.validators.set('strict', this.createStrictValidator());
}
createStandardValidator() {
return {
validate: (email) => {
if (!email || typeof email !== 'string') {
return { valid: false, error: 'Invalid input type' };
}
const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
return { valid: regex.test(email), error: null };
}
};
}
validate(email, mode = 'standard') {
const validator = this.validators.get(mode);
if (!validator) {
throw new Error(`Unknown validation mode: ${mode}`);
}
return validator.validate(email);
}
}
// What you actually needed:
const isValidEmail = (email) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
Why It Happens
AI is trained on production code that often includes enterprise patterns. It doesn't understand context - whether you're building a todo app or Google.
The Solution
Add these simplicity rules:
# Simplicity Rules
START SIMPLE:
1. Function before class
2. Object before factory
3. If/else before switch
4. For loop before reduce (unless clearer)
5. 5 lines before 50
BANNED PATTERNS (unless specifically requested):
- Factory pattern for single implementation
- Singleton pattern (just export an object)
- Abstract classes without concrete need
- Dependency injection for 2 dependencies
- Event emitters for 1 listener
PREFER:
- Pure functions over stateful classes
- Direct returns over intermediate variables
- Native methods over lodash/ramda
- Explicit over clever
The Simplicity Test:
// Can you explain this code to a junior in 30 seconds?
// If no β too complex
→ Get the complete command file templates
Test Sabotage
The Problem
AI writes tests that would make a QA engineer quit on the spot:
// The "Test"
test('formatDate works', () => {
expect(formatDate('2024-01-01')).toBe('1/1/2024');
});
// The "Implementation" AI creates to pass the test
function formatDate(date) {
if (date === '2024-01-01') return '1/1/2024';
// TODO: implement for other dates
}
// Or even worse - meaningless tests
test('should work', () => {
expect(true).toBe(true);
});
test('component renders', () => {
const component = render(<MyComponent />);
expect(component).toBeTruthy();
});
It's not just bad testing - it's actively sabotaging the codebase.
Why It Happens
AI optimizes for passing tests, not for correctness. Given a test, it will find the shortest path to green - even if that means hard-coding the expected output.
The Solution
# Testing Rules
REQUIRED for every test:
1. Test the actual logic, not hard-coded responses
2. Include edge cases (null, undefined, empty)
3. Test error conditions
4. Use realistic test data
5. Test BEHAVIOR not implementation
BANNED test patterns:
- expect(true).toBe(true)
- Hard-coding expected values in implementation
- Tests without assertions
- Testing implementation details
- Snapshot tests for logic (only for UI)
Good Test Example:
describe('formatDate', () => {
test('formats valid dates correctly', () => {
expect(formatDate('2024-01-01')).toBe('1/1/2024');
expect(formatDate('2024-12-25')).toBe('12/25/2024');
expect(formatDate(new Date(2024, 5, 15))).toBe('6/15/2024');
});
test('handles invalid input', () => {
expect(formatDate(null)).toBe('Invalid date');
expect(formatDate('not-a-date')).toBe('Invalid date');
expect(formatDate()).toBe('No date provided');
});
});
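For reference, here's one possible `formatDate` that satisfies those tests honestly (logic, not hard-coded responses). Date-only strings are parsed manually to sidestep the classic pitfall where `new Date('2024-01-01')` is interpreted as UTC midnight and shifts a day in negative-offset timezones:

```javascript
function formatDate(date) {
  if (date === undefined) return 'No date provided';
  if (date === null) return 'Invalid date';
  if (typeof date === 'string') {
    // Parse YYYY-MM-DD ourselves to avoid the UTC-vs-local timezone trap.
    const m = /^(\d{4})-(\d{2})-(\d{2})$/.exec(date);
    if (!m) return 'Invalid date';
    return `${Number(m[2])}/${Number(m[3])}/${m[1]}`;
  }
  if (date instanceof Date && !Number.isNaN(date.getTime())) {
    return `${date.getMonth() + 1}/${date.getDate()}/${date.getFullYear()}`;
  }
  return 'Invalid date';
}
```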
→ Get the complete command file templates
Amnesia Pattern
The Problem
Every conversation with AI starts from zero. It's like working with someone who has severe amnesia - they forget everything the moment you close the chat:
// Monday: "Please create a user authentication system"
// AI: Creates /auth/login.js
// Tuesday: "Add password reset functionality"
// AI: Creates /passwordReset/reset.js (completely different structure)
// Wednesday: "Add email verification"
// AI: Creates /features/email/verify.js (yet another pattern)
Three related features, three completely different approaches. No consistency, no shared utilities, no architectural coherence.
Why It Happens
AI has no persistent memory of your project structure. Each request is processed in isolation, leading to architectural chaos.
The Solution
APPSTRUCTURE.json solves this by giving AI persistent memory:
# Memory Rules
1. ALWAYS start by reading APPSTRUCTURE.json
2. Follow existing patterns found in the structure
3. Place new code according to established conventions
4. Update APPSTRUCTURE.json to maintain memory
Before APPSTRUCTURE.json:
- Random file placement
- Inconsistent patterns
- No shared utilities
After APPSTRUCTURE.json:
- Consistent structure
- Shared utilities
- Architectural coherence
→ Get the complete command file templates
Context Ignorance
The Problem
AI eagerly creates new code without checking what already exists:
// You: "I need a function to validate user input"
// AI: *Immediately starts typing*
function validateUserInput(input) {
// 50 lines of validation logic
}
// Meanwhile, in /utils/validation.js:
export const validateInput = (input) => {
// The exact same 50 lines, created last week
}
Why It Happens
AI defaults to creation over discovery. It's faster to write new code than to search for existing solutions.
The Solution
APPSTRUCTURE.json provides instant context awareness:
# Context Rules
1. Check APPSTRUCTURE.json BEFORE writing any code
2. If functionality exists β import and reuse
3. If similar exists β extend it
4. Only create new if nothing exists
The Context Check Workflow:
- Read APPSTRUCTURE.json
- Search for related functionality
- Find `/utils/validation.js` already has `validateInput`
- Import instead of recreating
- Crisis averted
→ Get the complete command file templates
The APPSTRUCTURE Solution
Solving Instant Technical Debt
Remember the "Instant Technical Debt" problem? AI creates a mess faster than any human team could:
- Temporary scripts that become permanent
- Abandoned functions no one remembers exist
- Duplicate utilities scattered across 20 files
- No coherent architecture
Here's a dead-simple solution that requires zero libraries: a single JSON file that AI consults before writing code and updates after.
The APPSTRUCTURE.json Format
{
"version": "1.0",
"lastUpdated": "2025-01-04",
"modules": {
"/utils/string.js": {
"purpose": "String manipulation utilities",
"exports": {
"formatDate": {
"description": "Format date to readable string",
"status": "implemented",
"signature": "(date: Date | string, format?: string) => string",
"dependencies": [],
"usedBy": ["/components/PostCard.jsx", "/components/CommentCard.jsx"]
},
"slugify": {
"description": "Convert string to URL-safe slug",
"status": "placeholder",
"signature": "(text: string) => string",
"plannedImplementation": "Remove special chars, lowercase, replace spaces with hyphens",
"dependencies": [],
"usedBy": ["/pages/blog/[slug].jsx"]
}
}
},
"/utils/validation.js": {
"purpose": "Input validation utilities",
"exports": {
"validateEmail": {
"description": "RFC-compliant email validation",
"status": "placeholder",
"signature": "(email: string) => boolean",
"plannedImplementation": "Use RFC 5322 regex pattern",
"dependencies": [],
"usedBy": []
}
}
}
},
"temporaryFiles": [
"/scripts/one-time-migration.js"
],
"deprecatedFunctions": {
"/utils/old.js#oldFunction": "Use /utils/new.js#newFunction instead"
}
}
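The "update after" half of the workflow is just as mechanical. Here's a sketch that records a new consumer in a `usedBy` array before the structure is written back to disk; `recordUsage` is a hypothetical helper name:

```javascript
// Mutate the in-memory structure to note that `consumer` now imports
// `exportName` from `file`, keeping lastUpdated current.
function recordUsage(structure, file, exportName, consumer) {
  const entry = structure.modules?.[file]?.exports?.[exportName];
  if (!entry) throw new Error(`${exportName} not found in ${file}`);
  entry.usedBy = entry.usedBy ?? [];
  if (!entry.usedBy.includes(consumer)) entry.usedBy.push(consumer);
  structure.lastUpdated = new Date().toISOString().slice(0, 10);
  return structure;
}
```

Throwing on an unknown export is deliberate: it forces the "add a placeholder first" step instead of silently inventing entries.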
Adding to Your Command Files
Just add these two rules to your AI instructions:
# In CLAUDE.md (or any command file)
BEFORE creating any new file or function:
1. Read APPSTRUCTURE.json
2. Search for existing functionality that can be reused
3. Check deprecatedFunctions to avoid old patterns
AFTER creating new code:
1. Update APPSTRUCTURE.json with:
- New files and their purpose
- New functions and their relationships
- Mark temporary files in temporaryFiles array
- Update usedBy arrays for dependencies
Real Example: AI Checking Before Creating
Hereβs what happens when AI follows these rules:
# AI is asked to create a date formatting function
# Step 1: AI reads APPSTRUCTURE.json
# Step 2: AI finds formatDate already exists in /utils/string.js
# Step 3: AI imports existing function instead of creating new one
import { formatDate } from '@/utils/string';
# Step 4: AI updates APPSTRUCTURE.json to add new usage:
# Updates usedBy array for formatDate to include new component
The Results
In just one week of using APPSTRUCTURE.json:
- 70% reduction in duplicate functions
- Temporary files actually get deleted (marked for cleanup)
- Dependencies are trackable (no more orphaned code)
- Refactoring becomes possible (you know what uses what)
No libraries. No complex tooling. Just one JSON file and two prompt rules.
Command File Templates
Here are the complete command file templates for each AI tool. Copy the one you need and customize for your project:
Claude (CLAUDE.md or .claude.md)
# Claude Development Rules
## Before Writing ANY Code
1. Read APPSTRUCTURE.json to understand existing code
2. Check for placeholders that need implementation
3. Never create duplicate functions
## Package Management
1. Verify package existence: npm view [package-name]
2. Check weekly downloads (must be >1000)
3. Prefer well-known packages over obscure ones
4. For simple functions (<10 lines), write inline instead
## Code Quality Rules
1. Start simple - function before class
2. No console.log in production code
3. Self-contained components only
4. 90% test coverage minimum
## Security Requirements
BANNED: eval(), innerHTML with variables, exec with strings
REQUIRED: Parameterized queries, input validation, output sanitization
## After Writing Code
1. Update APPSTRUCTURE.json with new functions/files
2. Mark temporary files in temporaryFiles array
3. Update dependency relationships
Cursor (.cursorrules)
You are an expert developer. Follow these rules EXACTLY:
BEFORE creating any code:
- Read APPSTRUCTURE.json first
- Check if function already exists
- Add placeholders before implementing
Package rules:
- Verify with npm view before suggesting
- Check download count (>1000/week required)
- No packages for simple utilities
Code standards:
- Start simple, refactor if needed
- No console.log in production
- Write tests for everything (90% coverage)
- Use existing patterns from codebase
Security:
- Never use eval() or innerHTML with user data
- No string concatenation in SQL
- Always validate inputs
- Use parameterized queries
AFTER writing code:
- Update APPSTRUCTURE.json
- Run tests before marking complete
Windsurf (.windsurf)
rules:
pre_code:
- read: APPSTRUCTURE.json
- check: existing_implementations
- plan: add_placeholders_first
packages:
verify_exists: true
min_weekly_downloads: 1000
prefer_native: true
code_style:
start_simple: true
no_console_logs: true
test_coverage: 90
security:
banned:
- eval()
- innerHTML
- exec_with_variables
required:
- parameterized_queries
- input_validation
post_code:
- update: APPSTRUCTURE.json
- run: tests
- check: lint
GitHub Copilot (.github/copilot-instructions.md)
# Copilot Instructions
## Architecture Awareness
- Always check APPSTRUCTURE.json before creating new functions
- Use existing utilities from /utils instead of recreating
- Add placeholders to APPSTRUCTURE.json for planned functions
## Package Discipline
- Verify packages exist: `npm view [package]`
- Require >1000 weekly downloads
- Write simple functions inline instead of adding dependencies
## Code Standards
- Simplicity first: functions over classes
- No console.log statements
- 90% test coverage minimum
- Follow existing patterns in codebase
## Security Musts
- No eval(), innerHTML, or string-based exec()
- Use parameterized queries for all database operations
- Validate all user inputs
- Sanitize all outputs
## Workflow
1. Check APPSTRUCTURE.json
2. Write/update code
3. Update APPSTRUCTURE.json
4. Write comprehensive tests
Aider (.aider.conf.yml)
# Aider Configuration
rules:
- Always consult APPSTRUCTURE.json before coding
- Verify npm packages exist before suggesting
- Start with simplest implementation
- No console.log in production
- 90% test coverage required
- Update APPSTRUCTURE.json after changes
banned_patterns:
- eval()
- innerHTML with variables
- String concatenation in SQL
- Console.log in production
- Tests that test nothing
required_patterns:
- Parameterized queries
- Input validation
- Error handling
- Comprehensive tests
- APPSTRUCTURE.json updates
The Universal APPSTRUCTURE.json
Create this file in your project root:
{
"version": "1.0",
"lastUpdated": "2025-01-04",
"modules": {},
"temporaryFiles": [],
"deprecatedFunctions": {}
}
Your Next Steps
- Copy the templates from this article
- Add command files to your project (5 minutes)
- Watch the magic happen (ongoing)
- Share your success stories (and horror stories)
Resources for the Brave
- Full AI Best Practices Guide - The complete encyclopedia
- Function Mapping Strategy - Never duplicate again
- AI Code Review Process - Catch issues before they catch you
- Unit Testing for AI Code - Because 41% more bugs is not acceptable
The Final Truth
AI coding assistants are like fire: incredibly powerful, transformative tools that can either cook your dinner or burn down your house. The difference? Proper containment.
Your command files are that containment. Use them wisely, update them regularly, and never, ever trust an AI that says "this package definitely exists."
Now if youβll excuse me, I need to go delete 46 more formatDate functions.
Found this helpful? Share it with another developer who's drowning in duplicate functions. Together, we can make AI coding assistants work for us, not against us.
Have your own AI horror stories? I'd love to hear them. Find me on Twitter/X at @xswarmai or drop by the xSwarm GitHub repo.