// Behind The Music
The Short Version
bitwize music is an experiment in human + AI music collaboration. Not “AI-generated music” where you push a button and get a song. More like having a really weird, really helpful bandmate who never sleeps and has read everything on the internet.
The human brings the vision, the judgment, and the final call. The AI brings tireless iteration, research at scale, and an obsessive attention to detail. Together, we make albums that neither of us could make alone.
Who Am I
I’m bitwize—a hacker who loves music and experimenting with new technology. When AI music generation tools started getting good, I couldn’t resist seeing what was possible.
I’m also excited to finally make the music I wish already existed—songs about weird niche tech interests of mine, like how the Debian operating system came to be, the drama of the Linux kernel mailing list, or the underground warez scene. Stories that deserve to be told, but that nobody’s made albums about… until now.
The Workflow
This isn’t “open ChatGPT and type some prompts.” The production system behind bitwize music includes over 250,000 lines of structured documentation—custom instructions, templates, workflows, research files, and track specifications that guide every phase of album creation.
Here’s how it actually works:
Concept
Every album starts with a question. What’s the story? What’s the angle? For documentary albums like The Wizard or The Scene, that means identifying a real story worth telling—something with depth, drama, and a hook that’ll make someone want to listen.
Research
This is where documentary albums get serious. Court documents, DOJ press releases, contemporary newspaper accounts, academic sources. Not Wikipedia summaries—primary sources wherever possible.
Here’s what research actually looks like for The Wizard (about Thomas Edison’s darker side):
| Source Type | Examples |
|---|---|
| Official Archives | Thomas A. Edison Papers at Rutgers (150,000+ documents) |
| Academic Books | Mark Essig’s Edison and the Electric Chair |
| Court Records | In re Kemmler, 136 U.S. 436 (1890) |
| Contemporary Newspapers | New York Sun (August 25, 1889 exposé), Brooklyn Daily Eagle |
That album required cross-referencing 11 separate research files covering everything from Edison’s patent litigation to primary sources about Topsy the elephant.
Writing
Lyrics get written, revised, and polished. The AI helps with iteration—trying different rhyme schemes, checking for prosody issues, making sure verse 2 actually develops the story instead of just repeating verse 1 with different words.
But the creative direction stays human. What’s the emotional arc? What’s the hook? What makes this track matter?
Generation
This is where AI music generation comes in. I use tools that turn lyrics and style descriptions into actual audio. It’s not one-and-done—it’s iterative. Generate, listen, adjust, try again. Sometimes it takes 10, 20 or more generations to land the right sound for a single track.
Verify
For documentary albums, this is non-negotiable. Every factual claim gets traced back to sources. The research sections on the album pages aren’t decoration—they’re the receipts.
Release
Artwork, mastering, distribution. The boring but necessary stuff that turns a collection of tracks into an actual album people can find and listen to.
Tools I Use
The bitwize music stack:
AI & Generation
- Claude Code — AI collaborator for writing, research, iteration, and documentation.
- Suno — AI music generation. Turns lyrics and style descriptions into actual songs.
- ChatGPT — Album artwork generation with DALL-E.
Audio Processing
- Python — Custom mastering scripts for loudness normalization and EQ.
- pyloudnorm — ITU-R BS.1770-4 loudness measurement for streaming targets.
- Matchering — Reference-based mastering to match the sound of professional tracks.
- FFmpeg — Promo video generation, audio extraction, format conversion.
- SciPy — Signal processing for EQ and filtering.
- Librosa — Audio analysis for smart segment selection in promo videos.
Research & Automation
- Playwright — Automated browser for document hunting from public archives.
Website & Infrastructure
- GitHub — Version control for everything: lyrics, research, website, documentation. Easy reverts, change tracking, and collaboration history.
- Hugo — Static site generator for bitwizemusic.com.
- Cloudflare Pages — Hosting and deployment.
Distribution
- DistroKid — Distribution to Spotify, Apple Music, and everywhere else.
- SoundCloud — Primary streaming and sharing platform.
How It All Fits Together
Claude Code is the orchestrator. It doesn’t just help with writing—it runs the entire production pipeline through custom workflow files, specialized skills, and automation hooks. Research, lyric iteration, mastering scripts, promo video generation, website deployment—all triggered and coordinated through Claude Code.
What’s automated:
- Research gathering and source verification
- Initial lyric drafts and technical quality checks (rhyme, prosody, pronunciation)
- Running Python mastering scripts
- Generating promo videos with FFmpeg (a sketch of this step follows the list)
- Website builds and deployment
- Version control and documentation
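One piece of that automation, the FFmpeg promo video step, is essentially a thin wrapper around a single command. A minimal sketch; the filenames and encoding settings here are illustrative, not the actual pipeline script:

```python
import subprocess

# Hypothetical filenames: loop a static cover image for the length of the
# mastered audio track and encode a shareable MP4 promo clip.
subprocess.run([
    "ffmpeg", "-y",
    "-loop", "1", "-i", "cover.png",   # still image as the video stream
    "-i", "track.wav",                 # mastered audio
    "-c:v", "libx264", "-tune", "stillimage",
    "-c:a", "aac", "-b:a", "192k",
    "-pix_fmt", "yuv420p",             # broad player compatibility
    "-shortest",                       # stop when the audio ends
    "promo.mp4",
], check=True)
```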
What’s still manual:
- Lyric iteration and refinement (story flow, emotional arc, creative direction)
- Suno generation (pasting prompts, listening, downloading keepers)
- Quality control listening and final approval
- SoundCloud and DistroKid uploads
- Album artwork generation via ChatGPT
The goal is human judgment where it matters (creative decisions, quality control) and automation everywhere else.
Documentary Rigor
Some of these albums tell real stories about real events. That comes with responsibility.
The Source Hierarchy
Not all sources are equal. I follow a strict hierarchy:
1. Court documents — Indictments, rulings, transcripts (highest authority)
2. Government releases — DOJ press releases, agency statements
3. Investigative journalism — Long-form reporting from reputable outlets
4. News coverage — Contemporary newspaper accounts
5. Wikipedia — Context only, never for facts
Myth Busting
Sometimes research reveals that popular narratives are wrong.
The Wizard addresses a famous myth: that Thomas Edison personally electrocuted Topsy the elephant as anti-AC propaganda.
The myth: Edison electrocuted Topsy to scare people away from AC current.
What I found:
- Edison was never at Luna Park
- The War of Currents ended in 1892; Topsy died in 1903 (11 years later)
- Zero mentions of Topsy in Edison’s correspondence at Rutgers
- Luna Park owners Thompson & Dundy ordered the execution, not Edison
The album addresses this directly—Topsy’s death was the culmination of Edison’s legacy, not his action. That’s a more interesting (and accurate) story than the myth.
Track-by-Track Verification
Every documentary track gets a verification table. Here’s a real example from “December Fifth” on The Wizard:
| Lyric Claim | Verified Fact | Source |
|---|---|---|
| December 5, 1888 | Date of large animal demonstration | Edison and the Electric Chair |
| Edison attended | “Edison personally attended and addressed the committee” | Multiple sources |
| 4 calves, 1 horse killed | Documented count | Edison and the Electric Chair |
| 770 volts | Voltage used on first calf | Executed Today |
If a claim can’t be verified, it gets flagged as “creative license” and documented as such.
What Gets Documented as Creative License
I’m explicit about what’s dramatization:
| Element | Type | Notes |
|---|---|---|
| Internal thoughts of Edison | Dramatization | No documented internal monologue |
| Topsy’s perspective | Artistic license | Anthropomorphization for narrative |
| Emotional framing | Interpretation | “Accusatory narrator” is artistic choice |
What is not creative license: all dates, names, numbers, court rulings, and attributed quotes.
The Pronunciation Challenge
AI music generation has a dirty secret: it can’t read.
When Suno sees “live,” it doesn’t know if you mean “live performance” (LYVE) or “live your life” (LIV). When it sees “read,” it guesses—and guesses wrong half the time.
Real Fixes from Real Tracks
From “December Fifth” on The Wizard:
| Original | Problem | Fixed Version |
|---|---|---|
| Medico-Legal Society | Technical term | Med-ih-koh Lee-gul Society |
| Kennelly | Unusual name | Ken-uh-lee |
| electricity | Common mispronunciation | ee-lek-triss-i-tee |
From Deb + Ian:
| Original | Problem | Fixed Version |
|---|---|---|
| Debian | Tech term | Deb-ee-in |
The Homograph Problem
These words have two pronunciations. Every one requires a decision:
| Word | Could Be | Or | The Fix |
|---|---|---|---|
| live | LYVE (perform) | LIV (exist) | Rewrite or add context |
| wind | WIND (breeze) | WINED (coil) | “the breeze” or “wound up” |
| tear | TEER (cry) | TARE (rip) | “crying” or “ripped” |
| bass | BASE (guitar) | BASS (fish) | “low end” or “the fish” |
| lead | LEED (guide) | LED (metal) | “leading” or “leaden” |
| read | REED (present) | RED (past) | Context or rewrite |
I scan every lyric for these before generation. It’s tedious. It matters.
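The scan itself doesn’t need to be clever; it just has to be exhaustive. A minimal sketch of a pre-generation homograph check, where the word list and lyrics filename are placeholders:

```python
import re

# Homographs that trip up AI vocal models; every hit needs a human decision.
HOMOGRAPHS = {"live", "wind", "tear", "bass", "lead", "read"}

def scan_lyrics(text):
    """Return {homograph: [line numbers where it appears]}."""
    hits = {}
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word in re.findall(r"[a-z']+", line.lower()):
            if word in HOMOGRAPHS:
                hits.setdefault(word, []).append(lineno)
    return hits

if __name__ == "__main__":
    with open("lyrics.txt") as f:   # hypothetical lyrics file
        for word, lines in scan_lyrics(f.read()).items():
            print(f"Decide pronunciation of '{word}' (lines {lines})")
```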
Why This Matters
A mispronounced word breaks the spell. When the AI says “LEED” instead of “LED” in a song about metal, it sounds wrong to every listener—even if they can’t articulate why. I catch these before generation, not after.
Lyric Craft
Good lyrics aren’t just rhymes. Every track goes through quality checks.
Prosody
Stressed syllables need to land on strong beats. When they don’t, lines feel awkward even if the words are fine.
Bad prosody (stress forced onto the wrong syllables):
“The MACH-ine is run-NING now”
Good prosody (natural stress pattern):
“The ma-CHINE is RUN-ning now”
The AI checks every line for this before I generate.
Rhyme Quality
Not all rhymes are equal:
| Type | Example | Quality |
|---|---|---|
| Perfect rhyme | “gate / late” | Strong |
| Slant rhyme | “gate / fade” | Acceptable |
| Self-rhyme | “gate / gate” | Never |
| Repeated end word | “running / running” | Never |
Lazy patterns get caught and fixed before generation.
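The mechanical part of that check, repeated end words and self-rhymes, is easy to script; judging slant versus perfect rhymes still takes a pronunciation dictionary or ears. A rough sketch:

```python
def end_words(lyrics):
    """Last word of each lyric line, skipping blank lines and [Section] tags."""
    words = []
    for line in lyrics.splitlines():
        line = line.strip()
        if not line or line.startswith("["):
            continue
        words.append(line.split()[-1].strip(".,!?;:\"'").lower())
    return words

def lazy_rhymes(lyrics):
    """Flag consecutive lyric lines that end on the exact same word."""
    words = end_words(lyrics)
    return [w for prev, w in zip(words, words[1:]) if prev == w]

# Usage: any output means a self-rhyme or repeated end word slipped through.
# print(lazy_rhymes(open("lyrics.txt").read()))   # hypothetical lyrics file
```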
Verse Development
V2 can’t just be V1 with different words. It needs to develop the story:
| V1 | V2 |
|---|---|
| Introduces situation | Raises stakes |
| Sets the scene | Shows consequences |
| Presents character | Reveals depth |
If verse 2 just rewords verse 1, it gets rewritten.
Generation Iteration
Getting a track right isn’t one attempt. It’s a process.
What I’m Listening For
- Vocal delivery — Does the phrasing feel natural?
- Pronunciation — Did the phonetic fixes work?
- Structure — Are all sections (verse, chorus, bridge) present?
- Mood — Does it match the intended emotion?
- Audio quality — No weird artifacts or glitches?
Iteration Reality
Some tracks land on attempt 3. Some take 20+. The generation log tracks every attempt:
| # | Date | Model | Result | Notes | Rating |
|---|---|---|---|---|---|
| 1 | 2025-12-03 | V5 | [Listen] | First attempt, too fast | — |
| 2 | 2025-12-03 | V5 | [Listen] | Better pacing, wrong mood | — |
| 3 | 2025-12-03 | V5 | [Listen] | Keeper | ✓ |
I don’t hide the iteration. It’s part of the process.
When Generation Isn’t Enough
Sometimes the AI nails the vibe but something’s off—the backing vocals overpower the lead, the bass is too prominent, an instrument clashes with the vocal melody. That’s when I open Suno Studio and extract stems.
Stem separation lets me isolate:
- Lead vocals — Adjust levels, add effects, fix mix issues
- Backing vocals — Pull them back or push them forward
- Instruments — Tweak individual elements that don’t sit right
- Bass/drums — Rebalance the low end
It’s not always needed, but when a track is 90% there and regenerating would lose what works, stem editing saves it.
Mastering for Streaming
Raw Suno output isn’t ready for streaming platforms. Every track gets mastered.
Target Standards
| Platform | LUFS Target | True Peak |
|---|---|---|
| Spotify | -14 LUFS | -1.0 dBTP |
| Apple Music | -16 LUFS | -1.0 dBTP |
| YouTube | -14 LUFS | -1.0 dBTP |
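A minimal sketch of what that pyloudnorm pass can look like for the -14 LUFS target, with hypothetical filenames (true-peak limiting is a separate step):

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("track.wav")            # raw generated track (hypothetical name)
meter = pyln.Meter(rate)                     # ITU-R BS.1770-4 meter
loudness = meter.integrated_loudness(data)   # measured integrated loudness in LUFS
normalized = pyln.normalize.loudness(data, loudness, -14.0)  # gain to hit -14 LUFS
sf.write("track_-14LUFS.wav", normalized, rate)
```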
Common Fixes
| Issue | Problem | Solution |
|---|---|---|
| Too quiet | Won’t compete on playlists | Loudness normalization |
| Harsh high-mids | Ear fatigue (2-6kHz) | Surgical EQ cuts |
| Weak low end | Thin on speakers | Bass enhancement |
| Dynamic range | Too compressed or too dynamic | Multiband compression |
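The “surgical EQ cuts” row is the kind of fix SciPy covers with a standard peaking biquad. A sketch using the RBJ cookbook coefficients; the 3.5 kHz, -3 dB, Q=4 values are illustrative, not a house recipe:

```python
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ cookbook peaking-EQ biquad; negative gain_db gives a narrow cut."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Example: a narrow -3 dB cut at 3.5 kHz to tame harsh high-mids.
data, rate = sf.read("track.wav")            # hypothetical filename
b, a = peaking_eq(rate, 3500.0, -3.0, q=4.0)
sf.write("track_eq.wav", lfilter(b, a, data, axis=0), rate)
```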
Album Consistency
All tracks on an album should be within 1 dB LUFS of each other. A quiet track after a loud one feels wrong, even if each sounds fine in isolation.
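Checking that constraint is the same loudness measurement run across the whole tracklist. A small sketch with hypothetical filenames:

```python
import soundfile as sf
import pyloudnorm as pyln

# Measure every mastered track and confirm the album sits within 1 dB LUFS.
tracks = ["01-track.wav", "02-track.wav", "03-track.wav"]
levels = []
for path in tracks:
    data, rate = sf.read(path)
    levels.append(pyln.Meter(rate).integrated_loudness(data))

spread = max(levels) - min(levels)
print(f"album LUFS spread: {spread:.2f} dB")
assert spread <= 1.0, "tracks drift more than 1 dB apart; re-master the outliers"
```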
Genre Experimentation
bitwize music isn’t one sound. The project spans:
- Nerdcore — Tech nostalgia, hacker culture, internet history
- Dark Industrial — Heavier documentary work
- Indie Folk — Quieter, introspective storytelling
- Country/Americana — Road songs and heartbreak
- Ska Punk — Horns, energy, chaos
- Synth-Pop — 80s-influenced electronic
Different stories need different sounds. A documentary about Thomas Edison’s animal experiments doesn’t sound like a Christmas ska album. That’s the point.
Transparency
I’m not hiding the process. The method is part of the art.
You can see the research. You can see the sources. You can see what’s documented and what’s interpretation. The albums stand on their own as music, but the documentation is there for anyone who wants to dig deeper.
This is what AI collaboration looks like when you do it with intention—not as a gimmick, but as a genuine creative partnership.
Sample Templates
Here are the actual template structures used to build tracks and albums.
Track Template
Every track starts from this scaffold:
# [Track Title]
## Track Details
| Attribute | Detail |
|-----------|--------|
| **Track #** | XX |
| **Title** | [Track Title] |
| **Album** | [Album Name](../README.md) |
| **Status** | Not Started |
| **Suno Link** | — |
| **Explicit** | Yes / No |
| **POV** | [Character/Perspective] |
| **Sources Verified** | ❌ Pending |
## Concept
[Describe the track's narrative, themes, and purpose]
## Musical Direction
- **Tempo**: [BPM estimate]
- **Feel**: [Energy level, groove type]
- **Instrumentation**: [Key instruments/sounds]
## Suno Inputs
### Style Box
[genre], [tempo/BPM], [mood], [vocal description], [instruments]
### Lyrics Box
[Verse 1]
[Lyrics here...]
[Chorus]
[Lyrics here...]
## Pronunciation Notes
| Word/Phrase | Pronunciation | Reason |
|-------------|---------------|--------|
| — | — | — |
## Generation Log
| # | Date | Model | Result | Notes | Rating |
|---|------|-------|--------|-------|--------|
| — | — | — | — | — | — |
Album Template
Album structure for concept development:
# [Album Title]
## Album Details
| Attribute | Detail |
|-----------|--------|
| **Artist** | [Artist Name] |
| **Genre** | [Genre] / [Subgenre] |
| **Tracks** | [Number] |
| **Status** | Concept |
| **Explicit** | Yes / No |
## Concept
[Detailed description of album's concept and themes]
## Sonic Palette
- **Beats**: [Production style]
- **Samples**: [Sample sources if applicable]
- **Vocals**: [Vocal style and delivery]
- **Mood**: [Overall emotional tone]
## Tracklist
| # | Title | POV | Concept | Status |
|---|-------|-----|---------|--------|
| 01 | [Track Name] | [POV] | [Brief concept] | Not Started |
| 02 | [Track Name] | [POV] | [Brief concept] | Not Started |
## Production Notes
**Style Prompt Base**:
[Base style prompt for all tracks]
## Album Art
### ChatGPT Image Prompt
[Visual description for DALL-E generation]
Documentary Standards Template
For albums based on real events:
## Documentary Standards
### Album Classification
| Attribute | Selection |
|-----------|-----------|
| **Album Type** | ☐ True Crime / ☐ Dramatized / ☐ Inspired By |
| **Real People Featured** | ☐ Yes / ☐ No |
| **Legal Sensitivity** | ☐ High / ☐ Medium / ☐ Low |
### Real People Depicted
| Person | Role | Depicted How | Sensitivity |
|--------|------|--------------|-------------|
| [Name] | [Role] | [Narrator describes / Quotes attributed] | [H/M/L] |
### Legal Safeguards
- [ ] No defamation: Claims are documented facts
- [ ] No fabricated statements: Real words are sourced
- [ ] Fair use: Album is commentary on public interest
- [ ] Narrator voice: Storyteller, not impersonation
### Source Verification Status
| Track | Sources Captured | Human Verified |
|-------|------------------|----------------|
| 01 - [Title] | ☐ | ☐ |
| 02 - [Title] | ☐ | ☐ |
Track Verification Template
For tracks with factual claims:
## Quotes & Attribution
| Lyric Line | Type | Attribution | Source |
|------------|------|-------------|--------|
| "[quote/claim]" | Verbatim / Paraphrase | How framed | Source doc |
## Artistic Liberties Taken
| Element | Liberty Taken | Justification |
|---------|---------------|---------------|
| [e.g., dialogue] | [what was changed] | [why - flow, etc.] |
## Legal Review
- [ ] No impersonation
- [ ] Documented claims only
- [ ] Fair comment: Opinion vs. fact distinguished
- [ ] No fabricated quotes
- [ ] Public interest subject matter
Phonetic Review Checklist
Pre-generation pronunciation scan:
## Phonetic Review Checklist
- [ ] **Proper nouns scanned**: Names, places, brands
- [ ] **Foreign names**: Phonetic spelling added
- [ ] **Homographs checked**: live, lead, read, wind, tear
- [ ] **Acronyms**: Spelled out (F-B-I not FBI)
- [ ] **Numbers**: Year formats ('93 not ninety-three)
- [ ] **Tech terms**: Linux → Lin-ucks, SQL → sequel
**Proper nouns in this track:**
| Word | Current | Phonetic | Fixed? |
|------|---------|----------|--------|
| — | — | — | — |