Pricebook ASMP

I built Pricebook ASMP to bring clarity to the chaotic, player-driven economy on asmp.cc. Shops are scattered across the world, prices shift constantly, and waystones form a fast-travel network. Pricebook lets players search for items, track price trends, and find the nearest waystone to any shop. It combines a Fabric mod that discovers in-world listings with a SQLite-backed API that preserves every change as an append-only timeline.

Efficient chunk scanning: avoiding redundant network payloads

The first challenge was getting useful state out of Minecraft without drowning the network. ShopScanner watches chunk loads, parses sign text into structured entries, and keeps a snapshot cache keyed by chunk. Every scan is diffed against the last known snapshot; if nothing changed we short-circuit instead of re-sending identical payloads. That keeps the client responsive and the server's append-only log clean.

public void scanChunk(ClientWorld world, WorldChunk chunk) {
    if (world == null || chunk == null) {
        return;
    }

    ChunkPos pos = chunk.getPos();
    long key = pos.toLong();

    Set<ShopSignParser.ShopEntry> currentShops = collectShops(world, chunk);
    Set<BlockPos> currentWaystones = collectWaystones(world, chunk);

    ChunkSnapshot previous = lastKnownChunks.get(key);
    ChunkSnapshot current = new ChunkSnapshot(Set.copyOf(currentShops), Set.copyOf(currentWaystones));
    if (previous != null && previous.equals(current)) {
        LOGGER.trace("Chunk {} unchanged, skipping scan", pos);
        return;
    }

    lastKnownChunks.put(key, current);

    List<ShopSignParser.ShopEntry> sorted = currentShops.stream()
            .sorted(ENTRY_ORDER)
            .collect(Collectors.toList());
    List<BlockPos> waystones = currentWaystones.stream()
            .sorted(BLOCK_POS_ORDER)
            .collect(Collectors.toList());

    String dimension = Dimensions.canonical(world);
    boolean empty = sorted.isEmpty() && waystones.isEmpty();
    if (empty && !transport.shouldTransmitEmpty(dimension, pos)) {
        LOGGER.trace("Chunk {} is empty and not known to server, skipping", pos);
        return;
    }

    transport.sendScan(config.senderId, dimension, pos, sorted, waystones);
}

Because the scanner owns a Long2ObjectOpenHashMap of ChunkSnapshots, I can cheaply clear the cache when the player disconnects, or surgically forget chunks when a world unloads. The snapshot design later made it trivial to write offline tests that replay historical scans into the server.
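The diff itself is small enough to sketch outside Java. Here is the shape of the snapshot comparison and the cache lifecycle in JavaScript; names like forgetChunk are illustrative, and the real cache keys chunks with ChunkPos.toLong() in a Long2ObjectOpenHashMap:

```javascript
// Set-valued snapshot equality, the heart of the chunk diff: two scans of a
// chunk are "the same" iff their shop entries and waystone positions match
// as sets, independent of iteration order. Entries here are string keys.
const setEquals = (a, b) => a.size === b.size && [...a].every(x => b.has(x));

const snapshotEquals = (prev, cur) =>
  setEquals(prev.shops, cur.shops) && setEquals(prev.waystones, cur.waystones);

// Cache lifecycle: clear everything on disconnect, forget one chunk on unload.
class SnapshotCache {
  constructor() { this.chunks = new Map(); }
  key(x, z) { return `${x}:${z}`; }        // stands in for ChunkPos.toLong()
  put(x, z, snap) { this.chunks.set(this.key(x, z), snap); }
  get(x, z) { return this.chunks.get(this.key(x, z)); }
  forgetChunk(x, z) { this.chunks.delete(this.key(x, z)); } // world unload
  clear() { this.chunks.clear(); }                          // disconnect
}
```

Because equality is defined over sets, scan order inside a chunk never forces a spurious resend.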

Waystones, heuristics, and bootstrap handshakes

Shops on ASMP are often paired with waystones. I taught the mod to recognise bespoke patterns—a lodestone on a slab, oxidised copper columns, even mushroom plinths—so the server can display the closest fast-travel target for each listing. That rule-set lives in a static map of WaystonePatterns and gets memoised on load.
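The rule-set idea reduces to a table of per-layer predicates checked bottom-up against a block column. This JavaScript sketch is illustrative only; the concrete pattern names and block checks below are assumptions, not the mod's actual WaystonePatterns:

```javascript
// Illustrative pattern table: each waystone style is a bottom-up column of
// block predicates. A candidate column matches if every layer predicate holds.
const WAYSTONE_PATTERNS = {
  lodestoneOnSlab: [
    block => block.endsWith('_slab'), // base layer
    block => block === 'lodestone',   // top layer
  ],
  copperColumn: [
    block => block.startsWith('oxidized_copper'),
    block => block.startsWith('oxidized_copper'),
  ],
};

// Return the name of the first matching pattern, or null.
const matchWaystone = column =>
  Object.entries(WAYSTONE_PATTERNS).find(([, layers]) =>
    layers.length === column.length &&
    layers.every((pred, i) => pred(column[i]))
  )?.[0] ?? null;
```

Keeping patterns declarative makes it cheap to add a new community style without touching the scanner.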

The HTTP layer is intentionally boring and resilient. On startup the client bootstraps by asking the server which chunks it already knows. That means I can safely skip empty scans unless they clear an existing chunk, and the server never forgets to prune removed shops.

public void sendScan(String senderId, String dimension, ChunkPos pos,
                     List<ShopSignParser.ShopEntry> shops, List<BlockPos> waystones) {
    ChunkCoordinate coordinate = new ChunkCoordinate(dimension, pos.x, pos.z);
    boolean empty = shops.isEmpty() && waystones.isEmpty();
    if (!empty) {
        serverKnownChunks.add(coordinate);
    }

    String payload = encodePayload(senderId, dimension, pos, shops, waystones);

    HttpRequest request = HttpRequest.newBuilder(scanEndpoint)
            .timeout(Duration.ofSeconds(REQUEST_TIMEOUT_SECONDS))
            .header("Content-Type", "application/json")
            .header("Accept", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(payload, StandardCharsets.UTF_8))
            .build();

    httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofString(StandardCharsets.UTF_8))
            .whenComplete((response, throwable) -> handleSendResult(coordinate, empty, response, throwable));
}

public void bootstrap() {
    LOGGER.debug("Bootstrapping transport: fetching known chunks from server");
    fetchChunksPage();
}

The result is a client that survives flaky Wi-Fi, pauses cleanly when the player opens single-player worlds, and resumes scanning with a consistent view of server state.
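The paging loop behind fetchChunksPage is not shown above, but its shape is roughly this. The endpoint and the page fields (page.chunks, page.nextCursor) are assumptions for illustration, not the real wire format:

```javascript
// Illustrative cursor paging for the bootstrap handshake: keep requesting
// pages of chunks the server already knows until no cursor comes back,
// accumulating coordinates into a Set the transport can consult.
const fetchAllKnownChunks = async (fetchPage) => {
  const known = new Set();
  let cursor = null;
  do {
    const page = await fetchPage(cursor); // e.g. GET /v1/chunks?cursor=...
    for (const c of page.chunks) {
      known.add(`${c.dimension}:${c.x}:${c.z}`);
    }
    cursor = page.nextCursor ?? null;
  } while (cursor !== null);
  return known;
};
```

Driving the loop off the server's cursor means a restart mid-bootstrap simply starts over with no partial state to reconcile.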

SQLite as an append-only time machine

I wanted the backend to feel like a flight recorder. SQLite plus better-sqlite3 gave me an embedded, server-free database with transactions, WAL journaling, and easy backups. The open path aggressively enables WAL and recovers from stale -wal/-shm files, a practical nod to hosting on Fly.io, where process restarts are frequent.

const openDatabase = (dbFile) => {
  const resolved = path.resolve(dbFile);

  const attemptOpen = () => {
    const db = new Database(resolved);
    try {
      db.pragma('journal_mode = WAL');
      return db;
    } catch (pragmaError) {
      db.close();
      throw pragmaError;
    }
  };

  try {
    return attemptOpen();
  } catch (err) {
    if (err && err.code === 'SQLITE_IOERR_SHORT_READ') {
      removeWalFiles(resolved);
      return attemptOpen();
    }
    throw err;
  }
};

Every table is designed to store timeline segments: the current state where removed_at IS NULL, plus archived rows for everything that's been replaced. The schema spells it out directly in the column comments, which makes it easy to explain the flow or chase down odd edge cases later.

CREATE TABLE IF NOT EXISTS shops (
  id                 INTEGER PRIMARY KEY AUTOINCREMENT,
  dimension          TEXT    NOT NULL,
  pos_x              INTEGER NOT NULL,
  pos_y              INTEGER NOT NULL,
  pos_z              INTEGER NOT NULL,
  owner              TEXT    NOT NULL,
  item               TEXT    NOT NULL,
  price              REAL    NOT NULL,
  amount             INTEGER NOT NULL,
  action             TEXT    NOT NULL,
  first_seen_at      INTEGER NOT NULL,
  first_seen_scan_id INTEGER NOT NULL REFERENCES scans(id),
  last_seen_at       INTEGER NOT NULL,
  last_seen_scan_id  INTEGER NOT NULL REFERENCES scans(id),
  removed_at         INTEGER
);
CREATE INDEX IF NOT EXISTS idx_shops_item_action_removed
  ON shops (LOWER(item), action, removed_at, price) WHERE removed_at IS NULL;

The insertScanTx transaction writes a scan row, then hands the payload to the shop and waystone adapters. Each adapter is responsible for reconciling current rows and marking old ones as retired—no deletions, just historical breadcrumbs.

const insertScanTx = db.transaction((scanRow, shopRows, waystoneRows) => {
  const { lastInsertRowid: scanId } = insertScanStmt.run(
    scanRow.senderId,
    scanRow.dimension,
    scanRow.chunkX,
    scanRow.chunkZ,
    scanRow.scannedAt
  );

  const persistedScan = { ...scanRow, scanId };

  shops.reconcileScan(persistedScan, shopRows);
  waystones.reconcileScan(persistedScan, waystoneRows);

  return scanId;
});

Reconciling scans, one chunk at a time

Reconciliation is entirely chunk-scoped. I group incoming shops by chunk, look up the current rows, and compare deterministic "state keys" (owner + item + price + amount + action). If the state matches, I only extend last_seen_at. If the sign changed, I mark the existing row as removed and insert a new one. Missing shops simply get a removed_at timestamp, which makes rewinding trivial.

const syncChunkShops = (dimension, chunkX, chunkZ, shops, scanRow) => {
  const existingShops = listShopsForChunkStmt
    .all(dimension, chunkX, chunkZ)
    .map(row => ({ ...row, posKey: keyForPosition(row.dimension, row.pos_x, row.pos_y, row.pos_z) }));

  const existingByPosition = new Map(existingShops.map(shop => [shop.posKey, shop]));
  const seenPositions = new Set();

  for (const shop of shops) {
    const posKey = keyForPosition(shop.dimension, shop.posX, shop.posY, shop.posZ);
    seenPositions.add(posKey);
    const existing = existingByPosition.get(posKey);

    if (existing) {
      const existingStateKey = stateKey(existing.owner, existing.item, existing.price, existing.amount, existing.action);
      const newStateKey = stateKey(shop.owner, shop.item, shop.price, shop.amount, shop.action);

      if (existingStateKey === newStateKey) {
        updateLastSeenStmt.run(scanRow.scannedAt, scanRow.scanId, existing.id);
      } else {
        markNotCurrentStmt.run(scanRow.scannedAt, existing.id);
        insertShopStmt.run(/* trimmed for brevity */);
      }
    } else {
      insertShopStmt.run(/* new shop */);
    }
  }

  for (const [posKey, shop] of existingByPosition) {
    if (!seenPositions.has(posKey)) {
      markNotCurrentStmt.run(scanRow.scannedAt, shop.id);
    }
  }
};
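The two key helpers the reconciler leans on are one-liners. The exact delimiters below are an assumption, an implementation detail rather than the project's code, but the shape is:

```javascript
// Position identity: where a shop lives. The same position across scans means
// "the same sign", even if its contents changed.
const keyForPosition = (dimension, x, y, z) => `${dimension}|${x}|${y}|${z}`;

// State identity: what the sign says. Any field change yields a new key,
// which retires the old row and inserts a fresh timeline segment.
const stateKey = (owner, item, price, amount, action) =>
  [owner, item, price, amount, action].join('|');
```

Splitting identity into "where" and "what" is what lets a repainted sign become a new timeline segment while an untouched one merely extends its last_seen_at.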

The same pattern powers waystones, with the added wrinkle that UI-driven packets can fill in the human-readable name/owner even if a chunk scan only sees the block structure.
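That merge rule, where structure scans establish existence and UI packets enrich it, can be stated as a tiny function. The field names are illustrative:

```javascript
// Merge a UI-driven waystone packet into an existing structural record:
// scans own the position; the packet may fill in name/owner, but a null
// packet field never clobbers a value that is already known.
const mergeWaystone = (existing, packet) => ({
  ...existing,
  name: packet.name ?? existing.name ?? null,
  owner: packet.owner ?? existing.owner ?? null,
});
```

The asymmetry is deliberate: a chunk scan can never "unname" a waystone, because it simply does not carry that information.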

Time travel for shoppers

Once the data model stores every revision, rewinding is straightforward. A recursive CTE builds a seven-day ladder of day-end timestamps; for each rung I aggregate the lowest price, remaining stock, and shop count among the rows that were "alive" at that moment. The API exposes the series at /v1/item/history, and the client renders it as a timeline with deltas.

WITH RECURSIVE dates(day_timestamp, is_today) AS (
  SELECT CAST(strftime('%s', 'now', 'start of day', '+1 day', '-1 second') AS INTEGER) * 1000, 1
  UNION ALL
  SELECT day_timestamp - 86400000, 0
  FROM dates
  WHERE day_timestamp > CAST(strftime('%s', 'now', 'start of day', '-5 days') AS INTEGER) * 1000
)
SELECT
  strftime('%Y-%m-%d', day_timestamp / 1000, 'unixepoch') AS date,
  MIN(s.price) AS lowestPrice,
  SUM(s.amount) AS stock,
  COUNT(DISTINCT s.id) AS shops
FROM dates d
LEFT JOIN shops s ON
  LOWER(s.item) = LOWER(@item)
  AND s.action = 'sell'
  AND s.first_seen_at <= d.day_timestamp
  AND (s.removed_at IS NULL OR s.removed_at > d.day_timestamp)
GROUP BY d.day_timestamp
ORDER BY d.day_timestamp DESC;
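The ladder can be cross-checked in plain JavaScript: seven rungs, each the last second of a UTC day in milliseconds, newest first. This is a sanity-check sketch, not code from the server:

```javascript
// Build a seven-day ladder of day-end timestamps (ms), newest first,
// mirroring what the recursive CTE produces for the history endpoint.
const dayLadder = (now = Date.now()) => {
  const DAY = 86_400_000;
  const startOfToday = Math.floor(now / DAY) * DAY; // UTC midnight
  const endOfToday = startOfToday + DAY - 1000;     // 23:59:59 UTC
  return Array.from({ length: 7 }, (_, i) => endOfToday - i * DAY);
};
```

Anchoring each rung at the end of its day means a shop first seen at any point during the day counts as "alive" for that rung.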

On the client side the /pricebook_history command uses PricebookRenderer to build a console table: it colours the lowest price in green, the weekly high in gold, and adds relative arrows next to stock and shop counts. It is ridiculously satisfying to watch prices drift and then jump back in time to verify when a market crashed.

Putting it all together

This build mixes client-heavy ergonomics (custom command palette, auto-complete against a remote catalog, waypoint overlays) with backend rigour (normalised schema, WAL-backed durability, migration scripts). The append-only log means I can take any chunk of scans, replay it into a fresh database, and regenerate the same API responses—handy for checking a suspicious trade or stress-testing a new feature. With Fly.io manifests and Dockerfiles in the tree, Pricebook ASMP stays easy to ship while showing off the chunk diffing, state reconciliation, and time-travel analytics that make the project tick.
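The replay property can be demonstrated without SQLite at all. Here is the idea reduced to an in-memory reconciler, a sketch that compares only price where the real server compares the full state key:

```javascript
// Append-only replay: feed chunk scans in order; rows are never deleted,
// only stamped with removedAt. Replaying the same scans into a fresh log
// always yields the same timeline.
const replay = (scans) => {
  const rows = [];            // the whole timeline, in insertion order
  const current = new Map();  // posKey -> live row
  for (const scan of scans) {
    const seen = new Set();
    for (const shop of scan.shops) {
      const posKey = `${shop.x},${shop.y},${shop.z}`;
      seen.add(posKey);
      const live = current.get(posKey);
      if (live && live.price === shop.price) {
        live.lastSeenAt = scan.at;          // unchanged: extend the segment
      } else {
        if (live) live.removedAt = scan.at; // changed: retire the old segment
        const row = { ...shop, firstSeenAt: scan.at, lastSeenAt: scan.at, removedAt: null };
        rows.push(row);
        current.set(posKey, row);
      }
    }
    for (const [posKey, live] of current) { // vanished from the chunk: retire
      if (!seen.has(posKey)) {
        live.removedAt = scan.at;
        current.delete(posKey);
      }
    }
  }
  return rows;
};
```

Determinism falls out of the structure: the output depends only on the scan sequence, never on prior database state.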