Andrew Huang
Oracle NetSuite final AI-assisted close management review screen.
Oracle NetSuite

Designing a faster way for finance teams to review month-end close

I designed and shipped a new Oracle NetSuite review experience for month-end close. It replaced a static checklist with a clearer workflow that helps finance leaders spot blockers, assess risk, and act faster.

Challenge

Month-end close review was spread across multiple tools.

Shipped

A 0-to-1 Close Manager workflow inside Oracle NetSuite.

Changed

One guided review surface for blockers, risk, and task assignment.

Role

Owned the core UX from concept exploration through handoff.

Context

Month-end close is a high-stakes workflow with a short decision window.

Finance leaders need fast, reliable answers about what is blocked, what is material, and where to act next.

Timeline

3 months

Team

PM, engineers, and finance SMEs

Users

CFOs, controllers, and accountants

My role

I owned the core UX from concept exploration through developer-ready handoff. I drove the shift from a static checklist to a real-time review experience, aligned the direction with PM, engineering, and SMEs, and delivered the final interaction model and specs.

Scope

Review experience, AI summaries, workflow states, handoff specs

Primary user

CFO

Needed a real-time view of close health and forecast risk so they could quickly decide where attention was needed during period close.

Secondary user

Controller

Needed to manage and expedite period-close tasks, identify exceptions quickly, and keep financial reporting accurate and on schedule.

Tertiary user

Accountant

Needed clearer task ownership, reminders, and due-state visibility so work could be completed on time without relying on manual follow-up.

Status quo

Before Close Manager, the workflow buried the next action inside manual coordination.

Finance leaders were checking status in multiple places, chasing updates, and piecing together what was holding up the close instead of moving it forward.

Status was spread across spreadsheets, manual reviews, and separate category systems.
No single view showed close health and the next action together.
Manual coordination increased as categories and exceptions grew.

Process

I limited this write-up to the three design moves that most changed the outcome: separating insight from action, improving comparison, and then resolving scale.

Initial direction

The first concept separated insight from action.

The initial direction looked comprehensive, but it made users connect risk to action on their own.

Overview, insight, and task detail all lived in separate zones, so finance leaders had to scan back and forth to understand what actually needed action.

Initial Oracle NetSuite concept showing summary cards, chart, and task table separated across the page.
01

Overview

Close health was visible, but too detached from the work that needed attention.

02

Insight

The summary sat beside the flow instead of guiding the next step.

03

Action

Users still had to translate risk into tasks on their own.

Explorations

The next concept improved overview, but only for two categories.

The two-column version made payables and receivables easier to compare side by side, and it brought summary and detail closer together. But it depended on the category set staying small and symmetrical, so it could not scale once more categories and richer AI states were added.

Oracle NetSuite two-column exploration showing payables and receivables side by side with summary and detail.
What it improved

This was the first direction that made progress easier to compare, but it still did not establish a scalable review flow.

Scalability issue

As categories grew, the review experience became harder to scan.

Adding more category cards and parallel AI summaries increased visibility, but it flattened hierarchy and made the next action less obvious.

Expanded Oracle review experience with many category cards and AI summaries competing for attention.
01

Clutter

Too many categories competed at once.

02

Parallel summaries

AI insight no longer had a clear place in the flow.

Constraint

More visibility was useful, but not if the next action disappeared.

Solution

I designed one place for finance leaders to monitor financial health, see real-time priorities, and assign work.

The experience brought close health, the highest-priority issues, and task assignment into one workflow so teams could close the books on time without the usual end-of-month stress.

Final Oracle NetSuite AI financial review layout with close status, selected category, AI summary, and task table in one guided review flow.
01

Global status

Overall close health stayed visible.

02

Selected category

The middle column kept the current workstream in focus.

03

Actionable detail

Task detail stayed adjacent to the summary.

Key product decision

Use AI to prioritize and explain risk, not automate financial decisions.

This gave finance leaders speed without losing trust or auditability.

AI-generated insights

AI summaries helped surface what needed attention first.

The AI did not make the decision. It helped finance leaders understand where to look before they took action.

Input

Financial report data from accounts payable and receivable, along with close-status signals from category workflows.

Output

A concise summary of close health and the specific areas that needed attention.

Human verification

Finance leaders could verify the summary through linked report sources before taking action.

Why not automate more?

Because these were high-stakes financial decisions, the AI only highlighted what needed attention; the decision and the action stayed with finance leaders.

Impact

A shipped 0-to-1 workflow replaced the old static close review.

The outcome was a live review experience that gave finance leaders one place to assess close health, understand risk, and move work forward before the end-of-month scramble.

0-to-1

Close Manager workflow shipped inside Oracle NetSuite

1 review flow

for close health, priorities, and task assignment

Human verified

AI guidance stayed explainable before finance leaders acted

Proof 01

Replaced a manual checklist spread across multiple tools with a live review workflow.

Proof 02

Made blockers, ownership, and next action visible in one workflow.

Proof 03

Established a scalable pattern for AI guidance in an auditable enterprise product.

Learnings

What this project taught me

This project taught me that the hardest part of enterprise AI design is not adding more information. It is deciding what should stay visible, what should collapse, and what needs to become action.