
Ribhav

Posted on • Originally published at Medium

How to Review AI‑Generated Solidity Like an Auditor (For Beginners)

Yesterday’s contract was vibey. You described what you wanted, hit enter, and boom: a working Solidity draft appeared on your screen.

No blank file.
No “where do I even start?”
Just a StudyBuddy‑style contract ready to deploy… or so it looked.

The problem is: blockchains don’t care about vibes. They care about exact behavior.

If an AI quietly forgets an access check, or updates balances after sending funds, or makes a bad assumption about an external call, the chain will happily execute that mistake for as long as the contract lives.

So today’s question is different from “how do I get AI to write Solidity?”

It’s: how do I read and review AI‑generated Solidity like a beginner auditor?

This is Day 29 of the 60‑Day Web3 journey, still in Phase 3: Development. The goal: turn you from “AI co‑pilot user” into someone who can look at a smart contract and say, “Here’s where this could break, and here’s what I’d change before trusting it.”


1. Why AI‑generated Solidity needs extra skepticism

AI coding tools are amazing at removing friction, but they don’t understand money, risk, or incentives. They just pattern‑match code that looks right.

That leads to a few recurring issues:

  • They’re trained mostly on web2 code, with Solidity sprinkled in later, so they sometimes pull in patterns that make no sense on‑chain (like trusting external systems blindly or ignoring gas).
  • They’re good at standard boilerplate (ERC‑20, basic Ownable patterns) but shaky with custom logic, complex flows, and edge cases.
  • They often produce confidently wrong code: it compiles, maybe even passes naive tests, but fails under adversarial or weird inputs.

If you treat AI as a “drop this into mainnet” machine, you’re asking for trouble.

If you treat AI as a fast junior dev and yourself as the reviewer, it becomes a superpower.


2. Shift into reviewer mindset

Before you even scan through the lines, you need a mental shift.

You’re not asking, “Does this compile?”

You’re asking, “How could someone break this?”

A simple reviewer mindset:

  • Assume the contract is wrong until you’ve convinced yourself it’s safe enough for its purpose. Especially if money moves through it.
  • Look for powers first, not syntax:
    • Who can move assets?
    • Who can change parameters?
    • Who can withdraw what, and when?
  • Prefer boring, clear logic over clever tricks. A “boring but understandable” contract is almost always safer than a “clever but confusing” one.

If you can’t explain what the contract does, in plain English, you do not understand it well enough to deploy it.


3. The 5‑step review checklist

Let’s say the AI just handed you a StudyBuddy‑style contract where users can log study sessions on‑chain. It compiles, functions run, everything seems fine.

Here’s a review flow you can run on any AI‑generated contract.

Step 1: Map the public surface

Start by listing:

  • State variables

    • What does this contract store?
    • Are there balances, ownership mappings, arrays that grow forever?
  • Public and external functions

    • Which functions can anyone call?
    • What do they change — state, balances, ownership, configuration?
  • Events

    • What actions are observable on‑chain?
    • Are important actions (like withdrawals or admin changes) emitting events?

The question you’re answering:

“If I only see the public interface, what powers does a random address have over this contract and its data?”

Red flags at this step:

  • A function that clearly looks like an admin action but is public and unprotected.
  • No events for sensitive operations, which makes later forensics and monitoring harder.
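As a sketch of what this mapping looks like in practice, here is a hypothetical StudyBuddy‑style contract (all names illustrative) with its public surface laid bare — including one deliberate red flag of the kind described above:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical StudyBuddy-style contract, with one deliberate red flag.
contract StudyBuddy {
    // State: what does this contract store?
    mapping(address => uint256) public sessionCount; // per-user tally
    address public owner;

    // Events: are important actions observable on-chain?
    event SessionLogged(address indexed user, uint256 total);

    constructor() {
        owner = msg.sender;
    }

    // Anyone can call this -- fine, it only touches the caller's own data.
    function logSession() external {
        sessionCount[msg.sender] += 1;
        emit SessionLogged(msg.sender, sessionCount[msg.sender]);
    }

    // RED FLAG: looks like an admin action, but it is public,
    // unprotected, and emits no event.
    function resetUser(address user) public {
        sessionCount[user] = 0;
    }
}
```

Listing the surface like this takes a minute and immediately surfaces `resetUser` as a function a random address should not have.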

Step 2: Find trust boundaries and external calls

Next, mark every place where the contract talks to something else:

  • Calling another contract’s function.
  • Sending ETH or tokens out.
  • Doing anything with call, delegatecall, or low‑level interactions.

Each of these is a trust boundary.

Questions to ask:

  • “What if this external contract is malicious or buggy?”
  • “Can it call back into my contract before my function has finished?”
  • “What assumptions am I making about its behavior?”

This is where reentrancy‑style issues appear:

  • Your code sends funds before updating balances.
  • The recipient is actually a contract with a fallback that calls you back again.
  • Your logic runs twice in the same transaction, letting them drain more than they should.

If you see a pattern like:

```solidity
(bool ok, ) = _to.call{value: amount}("");
require(ok, "Send failed");
balances[msg.sender] -= amount;
```

…that’s a classic “update state after interaction” smell. You’d want to:

  • Move the state change before the external call (checks‑effects‑interactions), or
  • Use a nonReentrant modifier from a battle‑tested library.
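A minimal sketch of the checks‑effects‑interactions fix for the smell above (fragment only; `balances` is assumed to be a `mapping(address => uint256)` in the surrounding contract):

```solidity
function withdraw(uint256 amount) external {
    // Checks
    require(balances[msg.sender] >= amount, "Insufficient balance");

    // Effects: update state BEFORE the external call,
    // so any reentrant call sees the already-reduced balance.
    balances[msg.sender] -= amount;

    // Interactions: the external call comes last.
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok, "Send failed");
}
```

Even with this ordering, adding OpenZeppelin’s `nonReentrant` modifier on top is cheap defense in depth.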

Step 3: Check access control carefully

Now zoom in on who is allowed to do what.

Identify all functions that:

  • Change critical config (fees, token addresses, oracles, price feeds).
  • Move funds out of the contract.
  • Grant roles or change ownership.
  • Pause/unpause or upgrade things.

Then verify:

  • Is there a clear guard (onlyOwner, role modifier, or similar) on each of these functions?
  • Does the guard itself do the right check (for example, msg.sender == owner, not tx.origin)?
  • Are there any “backdoor” functions that grant the same power without checks?

AI‑generated contracts often make mistakes like:

  • Completely forgetting a modifier on a sensitive function.
  • Using tx.origin for auth (which is dangerous and discouraged).
  • Hardcoding an address instead of storing it in state, which breaks flexibility and upgradability.

If a function can drain funds or change core logic, treat missing or weak access control as a deal‑breaker until fixed.
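Here is a sketch of what a correct guard looks like (names like `fee` and `FeeUpdated` are illustrative, not from any specific library):

```solidity
address public owner;
uint256 public fee;

event FeeUpdated(uint256 newFee);

modifier onlyOwner() {
    // msg.sender, never tx.origin: tx.origin stays the original EOA
    // even when the owner is tricked into calling a malicious contract,
    // so a tx.origin check can be bypassed via phishing.
    require(msg.sender == owner, "Not owner");
    _;
}

function setFee(uint256 newFee) external onlyOwner {
    fee = newFee;
    emit FeeUpdated(newFee); // sensitive change, so emit an event
}
```

Every function from your “critical” list should have a guard like this, and the event makes the admin action observable for Step 1’s checklist too.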

Step 4: Think in invariants, not just lines

Instead of reading line‑by‑line and hoping to notice issues, define a few invariants and try to break them.

Examples:

  • “Total recorded balances can never exceed what the contract actually holds.”
  • “No user should ever withdraw more than they have deposited.”
  • “Ownership of a resource can’t be duplicated.”

Then:

  • Walk through each function mentally and see if you can violate these invariants.
  • Check that every code path that moves funds or changes balances preserves them.
  • Consider what happens if external calls fail, revert, or behave weirdly.

If you can’t answer questions like:

  • “Why can’t this mapping be left in a bad state?”
  • “Why can’t this array grow forever and DOS the function?”

…you’ve found areas that need either refactoring or stronger checks.
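If you use Foundry, invariants like these can even be checked mechanically. A rough sketch, assuming forge‑std and a hypothetical `Vault` contract that tracks deposits in a `totalDeposits` variable:

```solidity
// Foundry invariant test sketch (Vault and totalDeposits are
// hypothetical names, not from the contract above).
import "forge-std/Test.sol";

contract VaultInvariantTest is Test {
    Vault vault;

    function setUp() public {
        vault = new Vault();
        targetContract(address(vault)); // fuzz random calls against it
    }

    // "Total recorded balances can never exceed what the contract holds."
    function invariant_solvent() public view {
        assertLe(vault.totalDeposits(), address(vault).balance);
    }
}
```

The fuzzer then hammers the contract with random call sequences and fails the test if any sequence breaks the invariant — exactly the mental exercise above, automated.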

Step 5: Attack with edge cases

Finally, mentally fuzz the contract.

Try:

  • Calling functions with zero values or empty strings.
  • Calling functions many times in a row (or many times in one block).
  • Passing in large values, or edge‑case indices for arrays.
  • Stressing loops or operations that scale with user data.

Look for:

  • Loops over unbounded arrays or mappings that could become too expensive to run.
  • Functions that assume “this list will always be small” without enforcing it.
  • Places where the contract assumes a token behaves perfectly (always returning true or never reverting).

AI loves the happy path.

Your reviewer job is to explore the unhappy paths.
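One unhappy path worth a concrete sketch is the unbounded loop. In a hypothetical reward payout (`recipients` and `rewards` are illustrative names), the “push” version can grow until it exceeds the block gas limit and bricks payouts for everyone; the “pull” version stays O(1) per user:

```solidity
// Risky: one oversized recipients array and this never succeeds again.
function payAll() external {
    for (uint256 i = 0; i < recipients.length; i++) {
        (bool ok, ) = recipients[i].call{value: rewards[recipients[i]]}("");
        require(ok, "Send failed");
    }
}

// Safer: each user pulls their own reward; cost does not scale with users.
function claim() external {
    uint256 amount = rewards[msg.sender];
    rewards[msg.sender] = 0; // effects before interaction
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok, "Send failed");
}
```

The pull version also removes another failure mode: in `payAll`, a single recipient that reverts blocks the whole batch.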


4. Common AI‑generated mistakes to watch for

You don’t need to know every bug class yet. Just start by hunting for a few common patterns that keep showing up in AI‑assisted Solidity:

  • Reentrancy‑prone flows

    • External calls before state updates.
    • Multiple external calls in a single function without any guard or pattern to structure them.
  • Missing access control

    • Admin‑looking functions (setX, changeY, withdrawAll) that are public or external with no modifiers.
    • Using tx.origin instead of msg.sender for security decisions.
  • Unsafe external assumptions

    • Ignoring return values from external calls.
    • Assuming a token’s transfer or transferFrom always behaves the same way.
  • Lack of basic input validation

    • Accepting zero addresses where they make no sense (like new admin or token addresses).
    • Not checking ranges for numeric inputs (negative values don’t exist, but “too big” or “nonsensical” ones do).
  • Messy separation of concerns

    • Functions that handle multiple responsibilities at once (calculate, update, send funds, emit events), which makes both testing and review harder.

Whenever you spot one, treat it as a prompt for a refactor:

  • Add the missing modifier.
  • Restructure the function to follow checks‑effects‑interactions.
  • Break complex functions into smaller ones.
  • Or, if the contract is too messy, delete and regenerate with a more constrained prompt.
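For the input‑validation case, the fix is usually a one‑line guard. A sketch of a hardened admin setter (`treasury` and `TreasuryUpdated` are illustrative names; `onlyOwner` is assumed to exist):

```solidity
function setTreasury(address newTreasury) external onlyOwner {
    // Reject the zero address: funds sent there are unrecoverable.
    require(newTreasury != address(0), "Zero address");
    treasury = newTreasury;
    emit TreasuryUpdated(newTreasury); // sensitive change, emit an event
}
```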

5. Turn this into your personal review routine

You don’t become an auditor in a day. But you can start acting more like one every time you touch AI‑generated Solidity.

Here’s a lightweight routine you can reuse:

  1. Print or paste the contract into your editor and skim the public surface.
  2. Mark external calls and trust boundaries with comments or highlights.
  3. Circle all admin‑like functions and verify their access control.
  4. Write down 3–5 invariants the contract should always respect.
  5. Try to break those invariants by imagining malicious or weird usage.
  6. Note every unclear piece of logic, and either:
    • Simplify it,
    • Replace it with a known pattern, or
    • Decide this contract is not ready for real value.

You can even paste your AI‑generated contract back into your AI assistant and say:

“Here is a checklist: surface, trust boundaries, access control, invariants, edge cases. Help me walk through each step and identify potential problems.”

The point is not to eliminate AI from your Solidity workflow.

The point is to make sure that you remain the one in charge of safety.

Tomorrow, we’ll zoom in on classic smart contract security bugs (like reentrancy) and see how they show up in real code — including AI‑generated code — so you can spot them before an attacker does.
