
How to convert CSV to JSON without uploading your data

Why server-side CSV-to-JSON converters are a privacy and performance problem, how browser-native conversion works under the hood, and a walkthrough of MakeMyStats's client-side converter.

Try /csv-to-json

You have a CSV file. You need it as JSON. You search "csv to json converter," click the first result, and upload your file to a website you've never heard of. The JSON comes back a few seconds later. Job done — except your data just traveled to a server in who-knows-where, got processed by code you can't inspect, and may or may not have been logged, cached, or stored.

For a throwaway file full of test data, that's probably fine. For a CSV containing customer emails, revenue numbers, employee records, or anything pulled from a production database, it's a gamble you don't need to take.

The problem with server-side converters

Most online CSV-to-JSON tools work the same way: your browser uploads the file to a server, the server parses it, and the server sends back the result. This round-trip introduces three problems.

Privacy. Your data leaves your machine. Even if the service promises not to store it, you're trusting their infrastructure, their logging configuration, and their ops team. If the service sits behind a CDN, your file might get cached at an edge node. If they use a cloud function, the cloud provider's logging defaults might capture request payloads. You can't verify any of this.

Performance. Uploading a 50MB CSV, waiting for the server to parse it, and then downloading a 150MB JSON response is slow. You're bottlenecked by your upload speed (often 10–20x slower than download speed on consumer connections), plus server processing time, plus the download. A file that takes 2 seconds to parse locally might take 30+ seconds round-trip. On a metered connection, you're also burning data twice — once for the upload, once for the download.

Availability. Server-side tools go down. They rate-limit you. They add CAPTCHAs. They gate file size behind paid plans. They deprecate their APIs. Your workflow breaks because someone else's server had a bad day. A tool that runs in your browser works offline, works on a plane, and works when the startup behind it runs out of runway.

How browser-native CSV parsing actually works

Modern browsers are fast enough to parse CSV files directly in JavaScript. The key tool is PapaParse, an open-source CSV parser that handles the gnarly edge cases of the CSV format — quoted fields, escaped delimiters, embedded newlines, different line endings, BOM markers — and does it quickly.

Here's what happens when you drop a CSV file onto a browser-based converter:

  1. File access without upload. The browser's File API gives JavaScript a reference to the file on disk. No network request happens. The file data stays on your machine.

  2. Parsing. PapaParse reads the file and tokenizes it into rows and columns. For small files (under ~10MB), this happens in one pass on the main thread — fast enough that you don't notice. For larger files, PapaParse can run in streaming mode inside a Web Worker, which keeps the UI responsive while parsing happens in the background.

  3. Delimiter detection. CSV isn't really a single format. Fields might be separated by commas, tabs, semicolons, or pipes. PapaParse auto-detects the delimiter by sampling the first few rows. You don't need to know or specify what separator your file uses.

  4. Conversion. Once the data is parsed into a JavaScript array of objects (one object per row, keyed by the header values), converting to JSON is just JSON.stringify. The heavy lifting was the parsing — the JSON serialization is trivial.

  5. Output. The resulting JSON string is displayed in the browser and available for download. The download uses a Blob URL — the browser creates a temporary in-memory file that you save to disk. Again, no network involved.
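The sampling idea in step 3 can be sketched in a few lines. This is a toy version of the approach, not PapaParse's actual algorithm: count each candidate delimiter in the first few lines and prefer the one that appears most, weighting consistency across rows. It ignores quoting, which the real detection has to handle.

```javascript
// Toy delimiter detection: sample the first few lines and score each
// candidate separator. Quoted fields are ignored here — PapaParse's
// real detection accounts for them.
function guessDelimiter(csvText, candidates = [",", "\t", ";", "|"]) {
  const lines = csvText.split(/\r\n|\r|\n/).slice(0, 5).filter(Boolean);
  let best = ",";
  let bestScore = -1;
  for (const d of candidates) {
    const counts = lines.map((line) => line.split(d).length - 1);
    const total = counts.reduce((a, b) => a + b, 0);
    // A delimiter that appears the same number of times on every
    // sampled line is more likely the real one.
    const consistent = counts.every((c) => c === counts[0]);
    const score = consistent ? total : total / 2;
    if (score > bestScore) {
      bestScore = score;
      best = d;
    }
  }
  return best;
}

guessDelimiter("a;b;c\n1;2;3"); // → ";"
```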

The entire pipeline — from raw CSV bytes to formatted JSON output — runs inside your browser tab. Open your dev tools, check the Network tab, and you'll see zero data transfer.
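The whole pipeline condenses to a few lines. The sketch below is deliberately naive — it splits on newlines and commas and ignores quoted fields, embedded newlines, and BOM markers, exactly the edge cases PapaParse exists to handle — but it shows the shape of steps 2 through 5:

```javascript
// Naive CSV → JSON pipeline: the header row becomes object keys,
// every cell stays a string, and the output is pretty-printed JSON.
// A real converter would use PapaParse here instead of String.split.
function csvToJson(csvText, indent = 2) {
  const lines = csvText.trim().split(/\r\n|\r|\n/);
  const headers = lines[0].split(",");
  const rows = lines.slice(1).map((line) => {
    const cells = line.split(",");
    return Object.fromEntries(headers.map((h, i) => [h, cells[i] ?? ""]));
  });
  return JSON.stringify(rows, null, indent);
}

const json = csvToJson("name,age\nAlice,30\nBob,25");
// In the browser, the download step wraps this string in a Blob URL:
// const url = URL.createObjectURL(new Blob([json], { type: "application/json" }));
```

Everything above runs in the tab; the commented-out Blob line is the only browser-specific piece.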

Walkthrough: using the MakeMyStats CSV-to-JSON converter

The CSV to JSON converter uses a two-pane layout. CSV goes on the left, JSON appears on the right.

Getting data in. You have two options: paste CSV text directly into the left editor, or click the file upload toggle and drop a .csv or .tsv file. The conversion happens live — as you type or paste, the JSON on the right updates immediately.

Header row toggle. By default, the converter treats the first row as column headers and produces an array of objects:

[
  { "name": "Alice", "age": "30", "city": "Portland" },
  { "name": "Bob", "age": "25", "city": "Denver" }
]

Turn off the header toggle, and you get an array of arrays instead:

[
  ["name", "age", "city"],
  ["Alice", "30", "Portland"],
  ["Bob", "25", "Denver"]
]

This matters when your CSV doesn't have headers, or when you want positional access instead of named fields.

Indent control. The indent selector switches between 2-space, 4-space, and tab indentation. Two spaces produces the most compact readable output and is the convention most JSON formatters default to. Tabs play nicely with editors configured for tab indentation. Four spaces is the converter's default.

Download. The download button saves the output as a .json file. The filename defaults to data.json but you get to name it in the save dialog.

When JSON structure matters

The output format of CSV-to-JSON conversion isn't always a flat array of objects. A few things to think about:

Data types. CSV is text. Every cell value is a string. The converter doesn't try to guess whether "30" should be the number 30 or the string "30" — it keeps everything as strings. This is intentional: type coercion at the converter level leads to surprises. If you need typed values, parse them in your application code where you control the rules.
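Coercing types in your own code might look like the sketch below — you decide which columns are numeric, and what happens when a value doesn't parse, instead of the converter guessing:

```javascript
// Convert selected string fields to numbers after conversion.
// You control the rules: which columns, and what to do with bad values.
function coerceNumbers(rows, numericKeys) {
  return rows.map((row) => {
    const out = { ...row };
    for (const key of numericKeys) {
      const n = Number(row[key]);
      out[key] = Number.isNaN(n) ? row[key] : n; // leave unparseable values alone
    }
    return out;
  });
}

coerceNumbers([{ name: "Alice", age: "30" }], ["age"]);
// → [{ name: "Alice", age: 30 }]
```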

Nested structures. CSV is flat. If you need nested JSON, you'll need to transform the converter output in a second step. For the reverse direction — flattening nested JSON into CSV — the JSON to CSV converter has a flatten toggle that converts {"address": {"city": "Portland"}} into a column named address.city.
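The flattening direction works roughly like this — a recursive walk that joins keys with dots. This is a sketch of the general technique, not the converter's actual code:

```javascript
// Flatten nested objects into dot-separated keys, the shape that
// maps cleanly onto CSV columns.
function flatten(obj, prefix = "") {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(out, flatten(value, path)); // recurse into nested objects
    } else {
      out[path] = value;
    }
  }
  return out;
}

flatten({ address: { city: "Portland" }, name: "Alice" });
// → { "address.city": "Portland", name: "Alice" }
```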

Empty cells. Empty CSV cells become empty strings in JSON (""), not null. This matches what the raw data actually contains — an empty field, not a missing one. If your downstream code needs null, a quick .map() over the result handles it.
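That quick .map() might look like this:

```javascript
// Replace empty-string cells with null, for downstream code that
// distinguishes "empty" from "missing".
function emptyToNull(rows) {
  return rows.map((row) =>
    Object.fromEntries(
      Object.entries(row).map(([k, v]) => [k, v === "" ? null : v])
    )
  );
}

emptyToNull([{ name: "Alice", city: "" }]);
// → [{ name: "Alice", city: null }]
```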

Large files. The converter works well with files up to a few hundred megabytes, depending on your browser's available memory. The parsing itself is fast — PapaParse can chew through a million-row CSV in a few seconds. The bottleneck for very large files is the JSON output, which is typically 3–5x larger than the CSV input because of the repeated key names and JSON syntax overhead.

Compared to command-line tools

If you already have jq, csvtojson, or Python's csv module in your workflow, you probably don't need a web tool. Command-line tools are faster for batch processing, can be scripted, and handle arbitrarily large files by streaming.

The browser-based converter fills a different niche: quick, one-off conversions where you don't want to open a terminal, remember the right flags, or install a tool. Drop the file, get the JSON, move on.

It's also useful when you're on a machine where you can't install software — a locked-down work laptop, a Chromebook, or someone else's computer. The converter runs anywhere a modern browser does.

The privacy guarantee

MakeMyStats's CSV-to-JSON converter processes your data entirely in your browser. No file upload, no server-side processing, no analytics on your file contents. You can verify this yourself: open your browser's developer tools, switch to the Network tab, drop a file, and confirm that no request is made.

Your data stays on your machine. That's not a feature we had to build — it's a consequence of the architecture. There's no server to send data to.

Try the CSV to JSON converter.