Can a volunteer use it quickly?
Try Volunteer Mode: pick a scenario and inspect the label, priority, latency, scene summary, and conservative next action.
Open Volunteer Mode →

During floods, wildfires, earthquakes, and hurricanes, volunteers receive messy text reports and photos when connectivity may be unreliable. Edge-Triage uses Gemma 4 on edge hardware to turn each report into a triage label, priority, and conservative next action while keeping humans in control.
Edge-Triage has something for field, ML, Gemma, agents, and responsible-AI reviewers. Use this guide to jump straight to the proof that matters for your judging lens.
Try Volunteer Mode: pick a scenario and inspect the label, priority, latency, scene summary, and conservative next action.
Open Volunteer Mode →

Check the two validated profiles, full-50 run IDs, latency budget, and why the frontier is more honest than a raw ledger maximum.
Open Metrics source →

Open Optimization Mode to see candidate profiles, keep/discard decisions, ablations, and the self-learning loop behind the final profiles.
Open Optimization Mode →

Read Evidence for local multimodal Gemma 4 value, privacy, human control boundaries, limitations, and reproducible proof.
Open Evidence →

Volunteer Mode shows field-facing triage. Optimization Mode shows the frontier evidence behind the submitted profile.
The curated offline demo is intentionally simple: choose one of the fixed public-safe scenarios below, and the right-hand card updates with the related image and analysis. Switch to the Live Gemma preview only when you want to upload a new image to the guarded API.
Choose one of the field examples to render the triage result.
Select a report to view conservative routing guidance.
Select a report to see what the model understood from the scene.
Edge-Triage is designed for the messy middle of a crisis: reports arrive fast, connectivity is uncertain, and responders need conservative routing support they can audit later. The demo evidence connects the product story to measured runs, public-safe scenarios, and reproducible documentation.
Gemma 4 is central because the same local multimodal model family can read short field notes plus image context, run through GGUF/llama.cpp-style edge packaging, and support both the volunteer workflow and the research evaluation loop.
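A hypothetical packaging flow along these lines, using standard llama.cpp tools; the file names are placeholders and the exact binary names and flags depend on the llama.cpp build in use:

```shell
# Hypothetical GGUF packaging flow (paths are placeholders, not project files).
# 1) Quantize a full-precision GGUF export down to an edge-friendly size.
./llama-quantize gemma-f16.gguf gemma-q4_k_m.gguf Q4_K_M

# 2) Run it fully offline, with deterministic decoding for auditable outputs.
./llama-cli -m gemma-q4_k_m.gguf --temp 0 -n 128 \
  -p "Field report: water rising near the school; two adults on the roof."
```

The same quantized artifact can then back both the volunteer workflow and the evaluation loop, since only the prompt and decoding settings differ between the two.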
The output is intentionally constrained to a label, priority, latency, confidence context, and conservative next action. It supports incident command and trained responders; it does not replace them.
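A minimal sketch of how such a constrained output could be validated before it reaches a volunteer. The field names, label vocabulary, and fallback values below are illustrative assumptions, not the project's actual schema; the point is that anything malformed degrades to a conservative default rather than a guess.

```python
import json

# Assumed (hypothetical) vocabulary for the constrained triage output.
ALLOWED_LABELS = {"medical", "rescue", "shelter", "supplies", "information"}
ALLOWED_PRIORITIES = {"low", "medium", "high"}

# Conservative fallback used whenever the model output cannot be trusted.
FALLBACK = {
    "label": "information",
    "priority": "high",
    "next_action": "escalate to a human responder for manual review",
}

def parse_triage(raw):
    """Parse model output into the constrained triage structure.

    Any malformed or out-of-vocabulary response falls back to the
    conservative default instead of improvising.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return dict(FALLBACK)
    if data.get("label") not in ALLOWED_LABELS:
        return dict(FALLBACK)
    if data.get("priority") not in ALLOWED_PRIORITIES:
        return dict(FALLBACK)
    if not isinstance(data.get("next_action"), str) or not data["next_action"]:
        return dict(FALLBACK)
    return {k: data[k] for k in ("label", "priority", "next_action")}

print(parse_triage('{"label": "rescue", "priority": "high", '
                   '"next_action": "flag for water rescue team"}'))
print(parse_triage("the model rambled instead of emitting JSON"))
```

Keeping the validator outside the model means the human-in-control boundary holds even when the model misbehaves.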
The speed and accuracy profiles are tied to run IDs in results.tsv and summarized in docs/CURRENT_FRONTIER.md, keeping README, demo, video, and Kaggle writeup claims aligned.
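A sketch of how claims could be tied back to ledger rows like those in results.tsv. The column names and values here are invented for illustration; the real ledger's schema may differ.

```python
import csv
import io

# Invented ledger rows in the spirit of results.tsv (columns are assumptions).
LEDGER = """run_id\tprofile\taccuracy\tlatency_ms
r041\tcandidate\t0.82\t950
r047\tspeed\t0.84\t610
r049\taccuracy\t0.91\t1180
"""

def published_profiles(tsv_text):
    """Return only the rows whose profile was kept for the submission."""
    rows = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return {r["profile"]: r for r in rows if r["profile"] in ("speed", "accuracy")}

profiles = published_profiles(LEDGER)
print(profiles["speed"]["run_id"])        # the run ID a speed claim should cite
print(profiles["accuracy"]["latency_ms"])
```

Because every README, demo, and writeup number resolves to a run ID this way, a reviewer can check any claim against the ledger rather than trusting prose.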
Ambiguous reports still need responder judgment. The Live Gemma preview is token-gated and rate-limited, so the curated offline demo remains the reliable public fallback; no medical or incident-command authority is delegated to the model.
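One way the token gate and rate limit on the guarded API could be sketched; the token store, window length, and per-window call limit below are illustrative assumptions, not the deployed values.

```python
import time
from collections import defaultdict, deque

VALID_TOKENS = {"demo-token-123"}   # hypothetical token store
WINDOW_S = 60                       # sliding-window length in seconds (assumed)
MAX_CALLS = 5                       # calls allowed per token per window (assumed)

_calls = defaultdict(deque)         # token -> timestamps of recent calls

def allow_request(token, now=None):
    """Gate a Live Gemma preview call: valid token AND under the rate limit."""
    if token not in VALID_TOKENS:
        return False
    now = time.monotonic() if now is None else now
    q = _calls[token]
    while q and now - q[0] > WINDOW_S:  # drop calls outside the window
        q.popleft()
    if len(q) >= MAX_CALLS:
        return False
    q.append(now)
    return True

assert not allow_request("wrong-token")
assert all(allow_request("demo-token-123", now=float(t)) for t in range(5))
assert not allow_request("demo-token-123", now=5.0)  # sixth call is rejected
```

Rejected calls simply fall back to the curated offline demo, so abuse of the live endpoint never degrades the public experience.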