devteapot/proj

proj

Research project: neural app runtimes and projection-based applications.

A projection-based application is built on the bet that an application's state, behavior, and domain semantics can live primarily in a neural model, while the user interface is only a real-time projection of that state.

This repo is currently research-first, not an implementation of a solved architecture. The strongest finding so far is negative and useful: no mature public system appears to fully satisfy the strong version of this idea.

Current Status

The audited conclusion is:

> Projection-based apps are a plausible new software category, but the strong architecture does not exist yet as a mature system. Existing systems are either generative UI, agent protocols, codegen apps, world-model demos, or hybrids that externalize state.

That distinction matters. A prototype that stores state in JSON, a database, files, or prompt-visible context is useful, but it is not proof that “the model is the app.” It is a hybrid.

Working Definition

Strong projection means:

  1. Application identity/state lives in model internals — weights, activations, recurrent state, KV cache, or another neural latent state are the authoritative runtime state.
  2. UI is a projection — text, HTML, voice, structured UI, or pixels are views of that runtime state, not the application itself.
  3. No durable symbolic app layer is authoritative — generated HTML, JSON schemas, databases, event buses, and tool calls may exist as adapters, but if they hold canonical state or logic, the system is hybrid rather than strong projection.

Taxonomy

| Category | Canonical state lives in | What to call it | Examples / analogues |
| --- | --- | --- | --- |
| Strong projection | Model hidden state / latents / KV / weights | Neural app runtime | Not mature today; research target |
| Weak projection | Conversation transcript / prompt context | Hallucinated app loop | Mirage, Websim-like patterns |
| Hybrid projection | External JSON/db/files plus model-driven mutation/rendering | Agentic projection runtime | Practical p0 candidate |
| Generative UI | UI specs / HTML generated by model, app state elsewhere | UI generation | A2UI, AG-UI integrations, Thesys, Gemini Generative UI |
| Codegen app | Generated source code and conventional runtime | Vibe/app builder | Lovable, Bolt, v0, Cursor-style flows |
| World model runtime | Learned latent dynamics, usually visual/environmental | Neural simulator | GameNGen, Oasis, Genie |

Reading Order

  1. references/consolidated-research.md — canonical research synthesis.
  2. references/slop-fit.md — SLOP fit analysis: projection protocol, not neural runtime.
  3. experiments/codex-research-opus.md — sharper critical survey; identifies Mirage/world-model analogues.
  4. experiments/codex-research-gpt55.md — broader survey of adjacent academic/protocol/startup work.
  5. experiments/ARCHITECTURE.md — audited architecture plan split into weak, hybrid, and strong prototype tracks.
  6. references/source-status.md — citation hygiene and source-confidence notes.

Prototype Strategy

Do not build a single prototype and call it “the neural app runtime.” Build three explicitly labeled tracks:

Track A — Weak Projection

A Mirage-style loop: user action + transcript → model emits the next UI. No app-specific code. This tests UX malleability and state drift, not hidden-state persistence.
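The Track A loop can be sketched in a few lines. Everything here is a hypothetical shape, not an implementation: `call_model` stands in for an LLM call that receives the full transcript and returns the next UI rendering.

```python
from dataclasses import dataclass, field


# Hypothetical stand-in for the LLM call; a real Track A loop would send
# the transcript to a model and get back the next rendered UI.
def call_model(transcript: list[str]) -> str:
    return f"<ui>screen after {len(transcript)} events</ui>"


@dataclass
class WeakProjectionLoop:
    """Mirage-style loop: the transcript is the only application state."""

    transcript: list[str] = field(default_factory=list)

    def step(self, user_action: str) -> str:
        self.transcript.append(f"user: {user_action}")
        ui = call_model(self.transcript)  # model re-derives the whole UI
        self.transcript.append(f"ui: {ui}")
        return ui
```

Note that nothing outside the transcript anchors semantics, which is exactly why this track measures state drift rather than hidden-state persistence.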

Track B — Hybrid Projection

Explicit JSON/db/file state with model-driven state mutation and rendering. This is useful engineering scaffolding and likely the first practical demo, but it is not the strong thesis.
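A minimal sketch of the Track B shape, assuming a hypothetical patch format that the model would emit: canonical state is an ordinary dict, patches are validated by plain code before they are applied, and rendering is a deterministic function of state.

```python
import copy
import json

# Hypothetical state schema; in a real Track B prototype the model
# proposes patches against this tree and a renderer projects it to UI.
state = {"todos": [], "filter": "all"}


def validate(patch: dict) -> bool:
    # Canonical state lives in code, so validation is ordinary logic,
    # not a question we ask the model.
    return patch.get("op") == "append" and patch.get("path") == "todos"


def apply_patch(state: dict, patch: dict) -> dict:
    if not validate(patch):
        raise ValueError("rejected model-proposed patch")
    new_state = copy.deepcopy(state)  # transitions stay auditable
    new_state[patch["path"]].append(patch["value"])
    return new_state


def render(state: dict) -> str:
    # Deterministic projection: same state, same UI.
    return json.dumps(state, sort_keys=True)
```

This is exactly the externalized-state pattern the taxonomy calls hybrid: useful scaffolding, but the model never holds the canonical state.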

Track C — Strong Projection

Open-model experiment where hidden/KV/activation state is inspected, snapshotted, restored, probed, and tested as the candidate application state. This is the only track that can support the full “model is the app” claim.
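The snapshot/restore discipline Track C needs can be illustrated with a toy recurrent cell, purely as an analogy: the hidden vector below plays the role that KV cache or activations would play in a real open-model experiment.

```python
import math


class TinyRecurrentApp:
    """Toy analogue of a neural app runtime: the hidden vector is the
    *only* application state. Weights here are fixed constants; a real
    Track C experiment would target an open model's KV cache instead."""

    def __init__(self, size: int = 4):
        self.hidden = [0.0] * size  # authoritative runtime state

    def step(self, x: float) -> None:
        # one recurrent update: state evolves with each user event
        self.hidden = [math.tanh(h * 0.5 + x) for h in self.hidden]

    def snapshot(self) -> list[float]:
        return list(self.hidden)  # freeze application identity

    def restore(self, snap: list[float]) -> None:
        self.hidden = list(snap)  # resume exactly from that identity
```

If snapshotting and restoring the neural state reproduces application behavior exactly, that is evidence the state really is authoritative; if behavior diverges, something outside the model was load-bearing.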

Core Open Problems

  • Persistent neural state with identity across sessions.
  • Deterministic or near-deterministic projection from neural state to UI.
  • Independent validation without asking the model to grade itself.
  • Debuggability without falling back to symbolic state as the real app.
  • Multi-user, transactional, permissioned state semantics.
  • Latency acceptable for direct manipulation.
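The deterministic-projection problem above admits a simple harness, sketched here with a hypothetical greedy decoder: project the same neural state repeatedly and require byte-identical UI output.

```python
def decode_ui(state: list[float]) -> str:
    # Hypothetical greedy (argmax/temperature-0) decoder: each state
    # element maps to one UI token, deterministic by construction.
    vocab = ["<btn>", "<txt>", "<list>"]
    return "".join(vocab[int(abs(s) * 10) % len(vocab)] for s in state)


def is_deterministic(state: list[float], trials: int = 3) -> bool:
    # A stochastic decoder would produce more than one distinct output.
    outputs = {decode_ui(state) for _ in range(trials)}
    return len(outputs) == 1
```

The same harness shape gives independent validation: the check is ordinary code over outputs, never a request for the model to grade itself.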

Relation to SLOP

SLOP fits this repo as a projection protocol, not as the neural runtime itself. See references/slop-fit.md.

  • For Track A, SLOP can render live semantic state into an ephemeral <slop-state> context tail, giving weak projection a cleaner implementation than transcript soup.
  • For Track B, SLOP is probably the best near-term state/action boundary: explicit state tree, contextual affordances, salience, snapshots, patches, and validation around invokes.
  • For Track C, SLOP can expose decoded/probed latent state for inspection, but the actual strong claim still requires model-internal state instrumentation beyond ordinary SLOP trees.
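The Track A integration can be pictured as a small prompt-assembly step. The `<slop-state>` tag name comes from the text above; the JSON serialization is an assumption for illustration, not the SLOP specification.

```python
import json


def with_state_tail(prompt: str, state: dict) -> str:
    # Append an ephemeral <slop-state> context tail so the model sees
    # live semantic state instead of transcript soup. Serialization
    # format is assumed, not taken from the SLOP spec.
    tail = f"<slop-state>{json.dumps(state, sort_keys=True)}</slop-state>"
    return f"{prompt}\n{tail}"
```

Because the tail is regenerated each turn, it stays a projection of state rather than becoming an authoritative symbolic layer.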

Structure

```
proj/
├── README.md
├── research-prompt*.txt              # original research prompts
├── references/
│   ├── README.md                     # index and summary
│   ├── consolidated-research.md      # canonical survey
│   ├── slop-fit.md                   # SLOP fit: projection protocol, not runtime
│   └── source-status.md              # citation/source audit
└── experiments/
    ├── ARCHITECTURE.md               # prototype tracks after audit
    ├── codex-research-gpt55.md       # broad research run
    └── codex-research-opus.md        # critical research run
```
