Watch Claude Safely Inspect a Real Postgres Schema Without Write Risk
Giving an AI assistant database access is one of those ideas that sounds great right up until you imagine the dumbest possible outcome.
Best case, it saves you twenty minutes of context dumping.
Worst case, you let a model anywhere near production and it decides now is a great time to helpfully "fix" something.
That nervousness is not irrational.
Most people do not actually want to hand raw database power to Claude, Cursor, or anything else that occasionally gets a little too confident. They want the useful part: schema visibility, query help, and enough context to answer real questions without playing copy-paste intern for half an hour.
That is the whole point of @madebyaime/mcp-db-explorer.
It gives an MCP-compatible assistant read-only SQL visibility, which is a very different thing from giving it free rein over your database.
The sale here is not "AI for databases."
The sale is boundaries.
mcp-db-explorer exposes a small set of tools that are actually useful during debugging, analysis, and onboarding:
list_tables, describe_table, query, get_schema, get_sample_data, explain_query
That means the model can inspect tables and views, look at column definitions and constraints, pull sample rows, export schema DDL, run read-only queries, and inspect an execution plan.
What it does not expose is the more terrifying half of SQL.
Write operations are blocked. That includes INSERT, UPDATE, DELETE, DROP, ALTER, CREATE, TRUNCATE, REPLACE, MERGE, GRANT, REVOKE, CALL, transaction commands, and the usual other stuff you do not want an assistant improvising with.
The query validator also rejects sketchy patterns like INTO OUTFILE, LOAD_FILE, BENCHMARK(), SLEEP(), WAITFOR, pg_sleep, COPY ... TO, and suspicious multi-statement patterns.
Only SELECT, EXPLAIN, and WITH queries are allowed.
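Under the hood, that kind of gate is usually an allowlist on the first keyword plus pattern checks on the rest. The real package's internals may differ; this is a minimal TypeScript sketch of the idea, with illustrative names:

```typescript
// Hypothetical read-only validator sketch. Names and patterns are
// illustrative, not the actual mcp-db-explorer implementation.
const ALLOWED_PREFIXES = ["select", "explain", "with"];
const BLOCKED_PATTERNS = [
  /\binto\s+outfile\b/i,
  /\bload_file\s*\(/i,
  /\bbenchmark\s*\(/i,
  /\bsleep\s*\(/i,
  /\bwaitfor\b/i,
  /\bpg_sleep\s*\(/i,
];

function isReadOnly(sql: string): boolean {
  // Strip line comments, then normalize whitespace.
  const trimmed = sql.trim().replace(/^\s*--.*$/gm, "").trim();
  // Reject multi-statement input: only one trailing semicolon allowed.
  if (trimmed.replace(/;\s*$/, "").includes(";")) return false;
  // First keyword must be on the allowlist.
  const first = trimmed.split(/\s+/)[0]?.toLowerCase() ?? "";
  if (!ALLOWED_PREFIXES.includes(first)) return false;
  // Finally, no known-dangerous function or clause patterns.
  return !BLOCKED_PATTERNS.some((re) => re.test(trimmed));
}
```

Conservative by design: a query that merely mentions a blocked pattern gets rejected, which is the right default when the alternative is letting one slip through.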
That is a much saner threat model than "here, model, please be spiritually mature around my production data."
What it actually looks like in practice
Install it:
npm install -g @madebyaime/mcp-db-explorer
Start it against Postgres:
DATABASE_URL="postgresql://user:***@localhost:5432/mydb" npx @madebyaime/mcp-db-explorer
A small but important detail: the connection string lives in an environment variable.
That means the AI gets tool access, not your raw credential pasted into chat. On PostgreSQL, the adapter also sets statement_timeout and query_timeout to 30000 ms. On SQLite, it opens the database in read-only mode.
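For MCP clients that launch servers from a JSON config (Claude Desktop uses this shape), the wiring might look like the following. The server name is arbitrary, and the env entry mirrors the DATABASE_URL variable from the command above; treat the exact keys as a sketch against the standard mcpServers format:

```json
{
  "mcpServers": {
    "db-explorer": {
      "command": "npx",
      "args": ["-y", "@madebyaime/mcp-db-explorer"],
      "env": {
        "DATABASE_URL": "postgresql://readonly_user:***@localhost:5432/mydb"
      }
    }
  }
}
```

The credential stays in the config file on your machine; the model only ever sees tool calls and their results.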
There is also a default row cap of 100, which is exactly the kind of boring safety feature you only appreciate after the first time a model tries to grab half your database because you asked a vague question.
Demo: inspect a real-ish ecommerce schema
Let’s use a believable app schema with these objects:
users, orders, products, order_items, and an active_users view
This is the kind of database where you usually want quick orientation before you ask smarter questions.
1. First question: what is even in this database?
Prompt:
Use list_tables and show me the tables and views in this database.
Example output:
{
"count": 5,
"tables": [
{ "name": "users", "type": "table" },
{ "name": "orders", "type": "table" },
{ "name": "products", "type": "table" },
{ "name": "order_items", "type": "table" },
{ "name": "active_users", "type": "view" }
]
}
That already beats the normal workflow where you tab over to your SQL client, poke around manually, then come back and explain the schema to the model like you're translating for a very smart tourist.
2. Next question: what does the users table actually look like?
Prompt:
Describe the users table. I want columns, types, nullable fields, and indexes.
Example output:
{
"name": "users",
"type": "table",
"columns": [
{ "name": "id", "type": "integer", "nullable": false, "isPrimaryKey": true },
{ "name": "email", "type": "character varying(255)", "nullable": false, "isUnique": true },
{ "name": "name", "type": "character varying(100)", "nullable": true, "isPrimaryKey": false },
{ "name": "created_at", "type": "timestamp without time zone", "nullable": false, "defaultValue": "now()" }
],
"indexes": [
{ "name": "users_email_key", "columns": ["email"], "isUnique": true },
{ "name": "users_pkey", "columns": ["id"], "isUnique": true }
]
}
This is where the tool becomes immediately useful.
Instead of the model hallucinating what a users table probably looks like, it can inspect the real thing.
That alone cuts a lot of garbage out of AI-assisted debugging.
3. Then ask for sample data, without giving it the whole database
Prompt:
Get 5 sample rows from orders so I can see the shape of the data.
Example output:
{
"columns": ["id", "user_id", "status", "total", "created_at"],
"rows": [
{ "id": 1012, "user_id": 44, "status": "paid", "total": 89.00, "created_at": "2026-04-15T12:10:22Z" },
{ "id": 1013, "user_id": 18, "status": "pending", "total": 42.50, "created_at": "2026-04-15T12:14:09Z" },
{ "id": 1014, "user_id": 91, "status": "paid", "total": 199.99, "created_at": "2026-04-15T12:17:44Z" }
],
"rowCount": 3,
"truncated": false
}

(Output trimmed to three rows here for space.)
Again, boring in a good way.
You can inspect shape and values without letting the assistant go full vacuum-cleaner mode across the whole table.
4. Finally: help me understand a slow join
This is the part people actually want when they say they want "AI database help."
Prompt:
Explain this query plan for me:
SELECT *
FROM orders o
JOIN order_items oi ON o.id = oi.order_id
WHERE o.status = 'pending';
Example output:
{
"plan": "Hash Join\n -> Seq Scan on order_items\n -> Index Scan using orders_status_idx on orders"
}
Now the model can tell you what it sees, point at the join path, and help reason about indexing or row volume without ever being allowed to mutate anything.
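If you want to sanity-check what the assistant reports, the same plan is easy to reproduce yourself in psql. EXPLAIN (ANALYZE, BUFFERS) is standard PostgreSQL and adds actual timings and buffer statistics; note that ANALYZE executes the statement, which is harmless here because it is a SELECT:

```sql
-- Run in psql to compare against the assistant's summary.
-- ANALYZE executes the query; fine for a SELECT, never do it on a write.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders o
JOIN order_items oi ON o.id = oi.order_id
WHERE o.status = 'pending';
```

If the plan shows a sequential scan on order_items where you expected an index scan, that is your conversation starter, with real numbers attached.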
That is the real value proposition here.
Not "AI can do SQL now."
More like: "AI can finally see enough of your database to be useful, without being trusted like a DBA."
What this is safer than
It is safer than pasting credentials into a chat window and hoping the model behaves.
It is safer than giving a general-purpose agent broad database access and praying it understood the assignment.
It is safer than manually dumping schema into prompts and then wondering whether the model is answering from your actual database or from vibes.
And it is safer because the product surface is narrow on purpose.
That matters.
What you still need to be careful about
Read-only is not the same thing as privacy-safe.
If you point this at sensitive tables, the model can still read sensitive data. get_sample_data is useful, but it can also show real customer records if you aim it at the wrong thing.
So the honest version is this: mcp-db-explorer reduces write risk. It does not magically remove every other security concern.
You still need to think about which database you connect, which tables are available, whether the data is sanitized, and whether an assistant should be seeing any of it in the first place.
Also, a row cap and query timeout are guardrails, not moral virtues. They help. They do not replace good judgment.
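One concrete layer of that judgment: enforce read-only at the database itself, not just in the tool, by connecting with a role that only holds SELECT. A sketch, with illustrative role, database, and schema names:

```sql
-- Least-privilege role for the explorer connection (names are examples).
CREATE ROLE explorer_ro LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE mydb TO explorer_ro;
GRANT USAGE ON SCHEMA public TO explorer_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO explorer_ro;
-- Optionally keep sensitive tables out of reach entirely:
REVOKE SELECT ON users FROM explorer_ro;
```

Then point DATABASE_URL at explorer_ro. Now even a validator bug cannot write, and tables you revoked simply do not exist from the assistant's point of view.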
If you want the useful part without the stupid part
That is exactly where this tool fits.
You get schema inspection, sample data, read-only querying, and query-plan visibility. You do not get write access, free-form mutation, or a model wandering into destructive SQL because it felt inspired.
If that sounds like the line you wanted all along, grab it here: <MCP_DB_EXPLORER_URL>
mcp-db-explorer gives AI assistants enough database context to be useful, without giving them enough power to wreck your day.
Need safe database visibility without write-risk chaos?
Database Suite MCP is the closest live MBA offer to the db-explorer demo: read-first inspection, schema visibility, and a cleaner path into real database workflows.
Get Database Suite MCP →
Not sure which lane fits yet? Start with the Agent OS audit and get a practical next step instead of another generic tool list.