Amenti Labs
active investigation

QRNG Influence on LLM Outputs

Testing whether conscious intention can bias quantum random number generators used as entropy sources for large language model sampling.

This page is a project summary of work in progress rather than a finished public report.

contributors: Amenti Labs

Overview

Can human intention shift the outputs of a large language model when quantum randomness drives token selection? That is what this project tests.

LLM inference normally picks tokens using pseudo-random number generators. We built a vLLM plugin that swaps in any external entropy source: hardware QRNGs, OS randomness, or CPU timing jitter. If the PEAR Lab findings hold, directed intention could bias token-level output distributions through the quantum entropy path.

Research Questions

  • Does QRNG-seeded sampling produce statistically distinguishable outputs from PRNG-seeded sampling?
  • Can directed intention sessions shift token-level entropy metrics?
  • How do observed effect sizes compare to the PEAR Lab literature?
  • Does z-score signal amplification preserve or weaken any intention signal in the raw entropy?
  • How does entropy-dependent temperature scaling interact with intention effects at high- and low-certainty token positions?

How It Works

The system is a vLLM V1 LogitsProcessor plugin. It intercepts token selection and replaces the default sampler with an external-entropy pipeline. No changes to vLLM's source are required: the plugin registers itself through a Python entry point.

Sampling Pipeline

For each token, the pipeline runs five steps:

  1. Temperature computation. A temperature strategy reads the raw logit distribution and picks a value. A fixed strategy applies a constant. An entropy-dependent strategy (EDT) scales temperature based on Shannon entropy, lowering it when the model is confident and raising it when uncertain.

  2. Just-in-time entropy. The plugin pulls raw bytes from the configured source at the moment of need. No buffering or caching. For QRNG sources, this means a gRPC call per token. Three transport modes are supported: unary, server-streaming, and bidirectional-streaming.

  3. Signal amplification. Raw bytes (20,480 by default) are interpreted as float64 samples and reduced with a z-score statistic: the system takes the sample mean, normalizes it by the standard error, and maps the result through the normal CDF to produce one uniform float in [0, 1]. Any micro-bias in the source is concentrated into a measurable shift in this float.

  4. Token selection. Logits are scaled by temperature, filtered by top-k and top-p, converted to probabilities via softmax, and arranged into a CDF. The amplified float indexes into the CDF to pick a token.

  5. One-hot enforcement. The plugin forces all logits to negative infinity except the chosen token (set to zero), guaranteeing vLLM's downstream sampler respects the plugin's selection.
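The selection and one-hot steps above can be sketched in plain Python. This is a simplified illustration, not the plugin's actual code; the function names and defaults are hypothetical, and the real implementation operates on tensors inside vLLM:

```python
import math

def select_token(logits, u, temperature=1.0, top_k=0, top_p=1.0):
    """Pick a token index from `logits` using one uniform float u in [0, 1)."""
    scaled = [l / temperature for l in logits]
    # top-k: keep only the k highest logits (0 disables the filter)
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)
    if top_k > 0:
        order = order[:top_k]
    # softmax over the surviving candidates (stabilized by the max logit)
    m = max(scaled[i] for i in order)
    exps = [math.exp(scaled[i] - m) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-p (nucleus): keep the smallest prefix whose mass reaches top_p
    if top_p < 1.0:
        cum, cut = 0.0, len(order)
        for j, p in enumerate(probs):
            cum += p
            if cum >= top_p:
                cut = j + 1
                break
        order, probs = order[:cut], probs[:cut]
        mass = sum(probs)
        probs = [p / mass for p in probs]
    # walk the CDF; the amplified float indexes into it
    cum = 0.0
    for idx, p in zip(order, probs):
        cum += p
        if u < cum:
            return idx
    return order[-1]  # guard against floating-point rounding at u ~ 1

def one_hot(logits, chosen):
    """Force all logits to -inf except the chosen token (set to 0)."""
    return [0.0 if i == chosen else float("-inf") for i in range(len(logits))]
```

A low u lands in the high-probability head of the CDF, a high u in the tail, so any systematic displacement of u away from 0.5 shifts which tokens get picked.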

Entropy Sources

The plugin supports several source types through a registry:

  • Quantum (gRPC). Connects to any entropy server using the project's gRPC protocol. A circuit breaker tracks P99 latency and opens after consecutive failures.
  • System. Uses os.urandom(). Serves as the default fallback.
  • Timing noise. Harvests CPU timing jitter.
  • Pluggable. Third-party sources register via the qr_sampler.entropy_sources entry-point group.

A composition wrapper handles automatic fallback: if the QRNG fails, the system switches to a secondary source transparently.
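The fallback composition can be sketched as follows. The class and method names (`get_bytes`, `FallbackSource`) are illustrative, not the plugin's actual interface:

```python
import os

class SystemSource:
    """Default fallback: OS randomness (illustrative interface)."""
    def get_bytes(self, n: int) -> bytes:
        return os.urandom(n)

class FlakySource:
    """Stand-in for a QRNG source whose gRPC transport can fail."""
    def get_bytes(self, n: int) -> bytes:
        raise ConnectionError("gRPC channel unavailable")

class FallbackSource:
    """Composition wrapper: try the primary, fall back transparently."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def get_bytes(self, n: int) -> bytes:
        try:
            return self.primary.get_bytes(n)
        except Exception:
            # In the real plugin a circuit breaker would also record
            # the failure and eventually open the circuit.
            return self.secondary.get_bytes(n)
```

Callers see a single source; whether the bytes came from the QRNG or the fallback is recorded for analysis, since control trials depend on knowing the entropy path.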

Signal Amplification

This stage sits at the center of the experiment. QRNG output is uniform random bytes. Any intention-induced bias would be tiny, on the order of PEAR Lab effects (z-scores of ~2-3 over millions of trials).

Here is how the amplifier works:

  1. Reads N raw bytes as float64 samples (default: 2,560 samples from 20,480 bytes)
  2. Computes the sample mean
  3. Divides by the standard error of the mean
  4. Maps the z-score through the normal CDF to produce a uniform float

With 2,560 samples per token, even a small bias accumulates and surfaces as a detectable displacement from 0.5.
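The amplifier can be sketched as below. Two assumptions are baked in that the page does not specify: each 8-byte chunk is decoded as an unsigned integer scaled to [0, 1) (rather than reinterpreted bitwise as an IEEE double), and the standard error uses the theoretical standard deviation of a Uniform(0, 1) variable, 1/sqrt(12), rather than the sample standard deviation:

```python
import math
import struct

UNIFORM_SIGMA = 1.0 / math.sqrt(12.0)  # std dev of Uniform(0, 1)

def amplify(raw: bytes) -> float:
    """Collapse raw entropy bytes into one uniform float via a z-score.

    Assumes each 8-byte chunk encodes one sample, scaled to [0, 1).
    """
    n = len(raw) // 8
    samples = [v / 2**64 for v in struct.unpack(f"<{n}Q", raw[: n * 8])]
    mean = sum(samples) / n
    # z-score of the sample mean against the unbiased expectation 0.5,
    # normalized by the standard error of the mean
    z = (mean - 0.5) / (UNIFORM_SIGMA / math.sqrt(n))
    # normal CDF maps the z-score back to a uniform float in (0, 1)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For unbiased input the output is uniform; a source biased toward high or low bytes drives the z-score, and hence the output, toward 1 or 0.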

Per-Token Logging

Every token selection produces a structured record: raw entropy statistics, amplified u-value, Shannon entropy of the logit distribution, computed temperature, selected token ID and rank, token probability, candidate count after filtering, and timing data. These records feed the statistical analysis.
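A record of this shape could be modeled as a dataclass. The field names below are illustrative, not the plugin's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class TokenRecord:
    """One structured log record per selected token (illustrative fields)."""
    raw_mean: float          # mean of the raw entropy samples
    raw_z: float             # z-score before the CDF mapping
    u_value: float           # amplified uniform float used for selection
    logit_entropy: float     # Shannon entropy of the logit distribution
    temperature: float       # temperature actually applied
    token_id: int            # selected token
    token_rank: int          # rank of the token in the sorted distribution
    token_prob: float        # probability of the selected token
    candidate_count: int     # candidates surviving top-k / top-p filtering
    latency_ms: float        # end-to-end entropy-fetch timing

rec = TokenRecord(0.5012, 0.61, 0.729, 2.84, 0.87, 42, 3, 0.11, 12, 4.2)
row = asdict(rec)  # plain dict, ready for JSON-lines or a dataframe
```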

Methodology

Experiments use a controlled A/B framework:

  • Baseline. No intention directive. The participant chats with the model normally while QRNG provides entropy.
  • Active. The participant focuses intention on influencing output toward a semantic target (a topic, tone, or concept).
  • Control. Same setup but with system entropy (os.urandom()) instead of QRNG, isolating effects to the quantum path.

All trials follow pre-registered protocols with corrections for multiple comparisons. Metrics include u-value distribution tested against uniformity (Kolmogorov-Smirnov), token-level Shannon entropy across conditions, semantic similarity to intention targets, and effect sizes compared to PEAR baselines.
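The uniformity test on u-values can be illustrated with a one-sample Kolmogorov-Smirnov statistic against Uniform(0, 1). This sketch computes only the D statistic; a real analysis would use a library routine such as scipy.stats.kstest to obtain p-values:

```python
def ks_uniform(us):
    """One-sample KS statistic D against the Uniform(0, 1) CDF F(x) = x.

    D = sup |F_empirical(x) - x|; significance comes from the KS
    distribution, which this illustrative sketch does not compute.
    """
    xs = sorted(us)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# Evenly spread u-values hug the uniform CDF; clumped ones do not.
spread = [(i + 0.5) / 1000 for i in range(1000)]
clumped = [0.9] * 1000
```

Under the null hypothesis (no intention effect), the per-token u-values should be indistinguishable from uniform and D stays small; a sustained bias toward a semantic target would show up as a large D.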

Status

The sampling plugin is built and tested. Entropy serving and per-token logging are operational. We are finalizing the experimental protocol.

Sources

  • Jahn, R.G. & Dunne, B.J. "Margins of Reality: The Role of Consciousness in the Physical World." Harcourt Brace Jovanovich, 1987.
  • Nelson, R.D. et al. "Correlations of Continuous Random Data with Major World Events." Foundations of Physics Letters, 2002. https://doi.org/10.1023/A:1023981519179
  • Radin, D. "Testing Nonlocal Observation as a Source of Intuitive Knowledge." Explore, Vol. 4, No. 1, 2008. https://doi.org/10.1016/j.explore.2007.11.001