Manual reporting is for spreadsheets. We run real trading systems. So we built one that reports and audits itself — and you can too.
the pain: stat updates felt like a second job
You know the drill.
We had PnL results, win/loss rates, risk profiles… but to actually report them, we had to:
- Manually check which trades executed
- Validate outcomes against signal timestamps
- Calculate PnL based on screenshots or partial fills
- Fix duplicates in the DB
- Then update monthly reports manually
It felt like we were fighting the system we built. Not running it.
So we overhauled our backend logic — top to bottom. No more second guessing, no more cross-checking spreadsheets, and no more delay in publishing stats.
the upgrade: smart logging and self-validating PnL tracking
We’ve always been deployed on Railway, and this update didn’t move the system — it made it bulletproof.
We rebuilt our core architecture around reliability, auditability, and actual intelligence.
what changed (and why it matters)
1. centralized logic with safe_update_pnl()
We created a single PostgreSQL function that handles all trade updates:
CREATE OR REPLACE FUNCTION safe_update_pnl(...) RETURNS void AS $$
BEGIN
    -- check for existing entry
    -- insert or update accordingly
    -- log source and changes
END;
$$ LANGUAGE plpgsql;
Why it matters:
- Ensures idempotency — you can’t mess up by reprocessing data
- Enforces uniqueness without fragile app logic
- Captures source of truth: whether from Binance API, fallback, or manual
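As a minimal sketch, assuming the function takes the trade ID, symbol, PnL value, and source (the real signature is up to you), a single call from the application layer is all it takes:

-- hypothetical signature: trade ID, symbol, PnL, source
SELECT safe_update_pnl(4, 'BNBUSDT', 20.67, 'binance_api');

Everything else (dedup checks, audit logging, source tagging) happens inside the function.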
2. smarter schema with real constraints
We upgraded our PostgreSQL schema to include:
- Trade ID + Symbol as a unique constraint
- Clean indexing for fast lookups
- Fallback fields for redundancy and disaster tolerance
No more corrupt rows. No more guessing where the PnL came from.
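If you're retrofitting an existing table (here the pnl_tracking table from the build guide below), the upgrade can be a handful of statements. The constraint, index, and column names are illustrative:

-- scope uniqueness to the (trade_id, symbol) pair
ALTER TABLE pnl_tracking
    ADD CONSTRAINT pnl_tracking_trade_id_symbol_key UNIQUE (trade_id, symbol);

-- indexes for the lookups reporting actually runs
CREATE INDEX IF NOT EXISTS idx_pnl_tracking_symbol ON pnl_tracking (symbol);
CREATE INDEX IF NOT EXISTS idx_pnl_tracking_updated_at ON pnl_tracking (updated_at);

-- a redundant fallback field so a degraded data path still has somewhere to land
ALTER TABLE pnl_tracking
    ADD COLUMN IF NOT EXISTS fallback_source TEXT;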
3. verification script to prove every deploy
We built a deploy checker:
python verify_production_deployment.py
It verifies:
- The presence of safe_update_pnl()
- The active uniqueness constraint
- That no duplicates were inserted in the last hour
- That sample trades are being logged and calculated correctly
Now every deploy is provable — with output you can read and trust.
4. logs that speak human
Our logs used to be tech garbage.
Now they say:
Updated Trade ID 4 (BNBUSDT) with binance_api PnL: $20.67
EXISTING entry updated for Trade ID 5
Using fallback: mobile_app_fallback for Trade ID 6
Now your logs are your control panel — no extra tooling required.
what this fixed
- Reporting is now automated
- Every trade is traceable to its signal and data source
- PnL updates are safe, smart, and redundant
- Our team no longer wastes hours manually validating data
And it’s all running exactly where it always has — on Railway, now smarter.
hidden upgrades most systems skip
Here’s what’s quietly powerful under the hood:
scoped uniqueness
UNIQUE (trade_id, symbol)
This prevents collisions from reused trade IDs across pairs — a real-world edge case many miss.
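Concretely, with the pnl_tracking table from the build guide below (values made up):

-- same trade ID on two different symbols: both rows are allowed
INSERT INTO pnl_tracking (trade_id, symbol, pnl, source)
VALUES (42, 'BTCUSDT', 12.50, 'binance_api'),
       (42, 'ETHUSDT', -3.10, 'binance_api');

-- a second row for the same (trade_id, symbol) pair raises unique_violation
INSERT INTO pnl_tracking (trade_id, symbol, pnl, source)
VALUES (42, 'BTCUSDT', 99.99, 'manual');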
source-level trust
By tagging every PnL update with a source (binance_api, fallback, manual), we can now:
- Track confidence levels
- Segment analysis by data reliability
- Audit data origin per trade
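A sketch of that segmentation, using the source values above:

-- trade count and PnL per data source over the last 30 days
SELECT source,
       COUNT(*)           AS trades,
       ROUND(SUM(pnl), 2) AS total_pnl,
       ROUND(AVG(pnl), 2) AS avg_pnl
FROM pnl_tracking
WHERE updated_at > NOW() - INTERVAL '30 days'
GROUP BY source
ORDER BY trades DESC;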
real CI deploy verification
Most teams deploy and assume it worked.
We wrote a verifier that proves it worked — and logs the receipts. Add this to your CI and you’ve got trust baked in.
latency-aware architecture
By comparing signal timestamps to PnL update timestamps, we can analyze execution lag per trade, per symbol.
That’s optimization insight most trading teams never even try to gather.
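A sketch of that query; the signals table and its signal_time column are assumptions about your own schema:

-- average lag between signal time and PnL update, per symbol
SELECT p.symbol,
       AVG(p.updated_at - s.signal_time) AS avg_execution_lag
FROM pnl_tracking p
JOIN signals s ON s.trade_id = p.trade_id
GROUP BY p.symbol
ORDER BY avg_execution_lag DESC;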
snapshot logging
With this architecture, you can store PnL snapshots at regular intervals — building real-time floating PnL analysis into your system, without writing any new backend code.
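One way to wire that up is a snapshot table fed by a scheduled job (pg_cron, a cron-triggered psql call, whatever you already run); the table below is a sketch:

CREATE TABLE pnl_snapshots (
    id SERIAL PRIMARY KEY,
    snapshot_time TIMESTAMP DEFAULT NOW(),
    trade_id INT,
    symbol TEXT,
    pnl NUMERIC,
    source TEXT
);

-- run on a schedule to capture floating PnL over time
INSERT INTO pnl_snapshots (trade_id, symbol, pnl, source)
SELECT trade_id, symbol, pnl, source FROM pnl_tracking;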
build it yourself — full guide
Want to build this from scratch? Here’s exactly how.
step 1: set up your db schema
CREATE TABLE pnl_tracking (
    trade_id INT NOT NULL,
    symbol TEXT NOT NULL,
    pnl NUMERIC,
    source TEXT,
    updated_at TIMESTAMP DEFAULT NOW(),
    UNIQUE (trade_id, symbol)
);

CREATE TABLE pnl_audit_log (
    id SERIAL PRIMARY KEY,
    trade_id INT,
    symbol TEXT,
    pnl NUMERIC,
    source TEXT,
    event_time TIMESTAMP DEFAULT NOW(),
    payload JSONB
);
step 2: write a safe update function
CREATE OR REPLACE FUNCTION safe_update_pnl(...) RETURNS void AS $$
-- Checks if row exists
-- Updates if it does, inserts if not
-- Logs to audit table
$$ LANGUAGE plpgsql;
Nothing should ever touch pnl_tracking directly again.
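For reference, here is one possible body. The parameter names are illustrative, and this version leans on ON CONFLICT against the (trade_id, symbol) constraint to do the exists check and upsert atomically:

CREATE OR REPLACE FUNCTION safe_update_pnl(
    p_trade_id INT,
    p_symbol   TEXT,
    p_pnl      NUMERIC,
    p_source   TEXT,
    p_payload  JSONB DEFAULT NULL
) RETURNS void AS $$
BEGIN
    -- upsert: the unique constraint scopes the conflict to (trade_id, symbol)
    INSERT INTO pnl_tracking (trade_id, symbol, pnl, source, updated_at)
    VALUES (p_trade_id, p_symbol, p_pnl, p_source, NOW())
    ON CONFLICT (trade_id, symbol)
    DO UPDATE SET pnl = EXCLUDED.pnl,
                  source = EXCLUDED.source,
                  updated_at = NOW();

    -- audit trail: every call is logged, including the raw payload if provided
    INSERT INTO pnl_audit_log (trade_id, symbol, pnl, source, payload)
    VALUES (p_trade_id, p_symbol, p_pnl, p_source, p_payload);
END;
$$ LANGUAGE plpgsql;

Whether you spell it as an explicit existence check or as ON CONFLICT, the point is the same: one function, one write path, one audit trail.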
step 3: create a deploy verifier
import psycopg2

def check_fn_exists(cursor):
    # The safe_update_pnl() function must exist in this deploy
    cursor.execute("SELECT proname FROM pg_proc WHERE proname = 'safe_update_pnl';")
    assert cursor.fetchone(), "safe_update_pnl() not found"

def check_constraints(cursor):
    # The scoped unique constraint on pnl_tracking must be active
    cursor.execute("SELECT conname FROM pg_constraint WHERE conname LIKE '%pnl_tracking%';")
    assert cursor.fetchone(), "Unique constraint missing"
Run this after every deployment. Log the output.
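The "no duplicates in the last hour" check is one more query worth wiring into the verifier; a sketch against pnl_tracking:

-- should always return zero rows thanks to the unique constraint
SELECT trade_id, symbol, COUNT(*) AS copies
FROM pnl_tracking
WHERE updated_at > NOW() - INTERVAL '1 hour'
GROUP BY trade_id, symbol
HAVING COUNT(*) > 1;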
step 4: upgrade your logging system
Skip the print("OK") garbage.
Log like this:
Updated Trade ID 104 (ETHUSDT) with $15.89 from binance_api
Fallback triggered for AVAXUSDT — using mobile_app_fallback
Duplicate prevented by DB for 1000WHYUSDT
Make your logs readable for humans, not just your debugger.
step 5: use intelligent fallbacks
If the Binance API fails, fallback sources kick in:
- The mobile app
- An estimate based on SL/TP levels
- Delayed polling
You don’t go blind. You stay operational.
step 6: build real stats from this
Once you’ve got the system running:
- Group trades by symbol, strategy, or signal
- Calculate win rates, drawdowns, risk/reward
- Detect slippage from entry to exit
- Flag anomalies where fallback was used too often
This becomes your internal control tower.
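A couple of starter queries against pnl_tracking; the 20% fallback threshold is just an example:

-- win rate and average PnL per symbol
SELECT symbol,
       COUNT(*) AS trades,
       ROUND(100.0 * COUNT(*) FILTER (WHERE pnl > 0) / COUNT(*), 1) AS win_rate_pct,
       ROUND(AVG(pnl), 2) AS avg_pnl
FROM pnl_tracking
GROUP BY symbol;

-- flag symbols where fallback sources carried more than 20% of trades
SELECT symbol,
       ROUND(100.0 * COUNT(*) FILTER (WHERE source <> 'binance_api') / COUNT(*), 1) AS fallback_pct
FROM pnl_tracking
GROUP BY symbol
HAVING COUNT(*) FILTER (WHERE source <> 'binance_api') > 0.2 * COUNT(*);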
final thoughts
We didn’t make these changes to sound smart. We did them to stop the bleeding — wasted hours, inconsistent stats, broken reporting.
Now the system works for us, not the other way around.
And if you want to build a serious, transparent, self-correcting trading service — this is where you start.
It’s not rocket science. It’s just solid engineering.
— PremiumCoinSignals Engineering