
CSV Data Cleaning: Common Pitfalls and Solutions

CSV files are deceptively simple. Embedded commas, inconsistent quoting, mixed encodings, and trailing whitespace cause silent data corruption during processing.

The Illusion of Simplicity

CSV appears to be the simplest data format — just commas separating values. In practice, there is no single CSV standard. Different applications produce slightly different dialects with different quoting rules, escape characters, and line endings.
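Python's standard library acknowledges this dialect problem directly: `csv.Sniffer` inspects a sample of the text and guesses the delimiter and quote character. A minimal sketch, using a hypothetical semicolon-delimited sample:

```python
import csv
import io

# Hypothetical sample in a semicolon-delimited dialect with quoted fields.
sample = 'name;city\n"Doe, Jane";Berlin\n"Smith, Bob";Paris\n'

# Sniffer examines the sample and returns a Dialect describing it.
dialect = csv.Sniffer().sniff(sample)
print(dialect.delimiter)  # ';'

# Feed the detected dialect back into the reader.
rows = list(csv.reader(io.StringIO(sample), dialect))
print(rows[1])  # ['Doe, Jane', 'Berlin']
```

Sniffing is a heuristic, not a guarantee: always spot-check the detected dialect against a few rows before processing a whole file.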

Embedded Delimiters

When a field value contains a comma (e.g., an address like "123 Main St, Suite 4"), the field must be quoted. But quoting rules vary: some producers double-quote fields containing commas, others backslash-escape the comma, and some don't handle this case at all, silently splitting one field into two.
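The failure mode is easy to reproduce: a naive `split(',')` cuts straight through the quoted field, while a proper CSV parser honors RFC 4180-style double quoting. A short sketch:

```python
import csv
import io

line = '"123 Main St, Suite 4",Springfield\n'

# Naive splitting treats the embedded comma as a delimiter:
naive = line.strip().split(',')
print(len(naive))  # 3 fields instead of 2

# The csv module respects the double quotes around the field:
row = next(csv.reader(io.StringIO(line)))
print(row)  # ['123 Main St, Suite 4', 'Springfield']
```

The takeaway: never split CSV lines by hand; always go through a parser that understands the file's quoting rules.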

Encoding Issues

CSV files may be UTF-8, Latin-1, Windows-1252, or Shift-JIS, and they rarely declare their encoding. A file that opens fine in Excel may produce garbled text in a Python script or vice versa. The BOM (Byte Order Mark) at the start of UTF-8 files helps identify encoding but can cause issues if not handled properly.
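For UTF-8 input, Python's `utf-8-sig` codec is a safe default: it strips a leading BOM when one is present and is a no-op otherwise. A minimal sketch:

```python
# Bytes of a UTF-8 CSV file that starts with a BOM (EF BB BF).
raw_with_bom = b'\xef\xbb\xbfname,city\nJane,Berlin\n'

# 'utf-8-sig' removes the BOM, so the header comes out clean:
text = raw_with_bom.decode('utf-8-sig')
print(text.splitlines()[0])  # 'name,city'

# Plain 'utf-8' keeps the BOM as an invisible character glued to
# the first header name, breaking later column lookups:
bad = raw_with_bom.decode('utf-8')
print(bad.splitlines()[0] == 'name,city')  # False
```

For files that are not UTF-8 at all, a detection library such as chardet can guess the encoding first; decoding with the wrong codec either raises an error or silently mangles accented characters.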

Whitespace Problems

Leading and trailing whitespace in fields creates phantom mismatches: "Smith" and " Smith" are different strings. Some tools strip whitespace automatically; others preserve it. Trim all fields during import and establish consistent whitespace handling early in your data pipeline.
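The `csv` module preserves surrounding whitespace exactly as it appears in the file (its `skipinitialspace` option only drops the space directly after a delimiter), so trimming has to be an explicit step. A minimal sketch:

```python
import csv
import io

data = 'name , city \n Smith ,  Berlin\n'

# Strip every field as it is read in:
rows = [[field.strip() for field in row]
        for row in csv.reader(io.StringIO(data))]
print(rows)  # [['name', 'city'], ['Smith', 'Berlin']]

# The phantom mismatch disappears once both sides are trimmed:
print(' Smith'.strip() == 'Smith')  # True
```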

Practical Cleaning Checklist

  1. Detect the encoding (with the chardet library or similar) and convert to UTF-8.
  2. Identify the delimiter, quoting character, and escape character.
  3. Trim whitespace from all fields.
  4. Validate that field counts match the header columns.
  5. Check for and handle embedded newlines within quoted fields.
  6. Verify that numeric fields parse as numbers and date fields parse as dates.
