Data is the new oil, but for most legacy enterprises, it looks more like sludge.
We’ve all heard the mandate: "Use AI to unlock insights from our historical data!" Then you open the database, and it’s a horror show: 20 years of maintenance logs, customer support tickets, or field reports entered by humans who hated typing.
You see variations like "trans", "tranny", "gearbox", and "CVT" all referring to the same component, often typed in full-width characters or littered with stray punctuation.
If you feed this directly into an LLM or a standard classifier, you get garbage. The signal is lost in the noise.
In this guide, based on field research in vehicle maintenance analysis, we will build a pipeline to clean, vectorize, and analyze unstructured "free-text" logs. We will move beyond simple regex and use TF-IDF and cosine similarity to detect fraud and operational inconsistencies.
We are dealing with atypical data: unstructured free text mixed with structured fields like timestamps. Our goal is to verify whether a "Required Task" (the standard) was actually performed, based on the "Free Text Log" (the reality).
Here is the processing pipeline flow:

1. **Normalize** the raw text (Unicode NFKC, case folding, noise removal).
2. **Standardize** jargon by mapping synonyms to canonical terms with a thesaurus.
3. **Vectorize** the cleaned text with TF-IDF.
4. **Verify** each checklist item against its log using cosine similarity.
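To make that concrete, here is a tiny, made-up sample of what a pair of records might look like. The field names and values are illustrative only, not a real schema:

```python
# Hypothetical work orders: a structured "required task" field
# sits next to the messy free-text reality, plus a timestamp.
work_orders = [
    {
        "required_task": "Replace Air Filter",
        "free_text_log": "Changed the air element and cleaned housing",
        "logged_at": "2015-03-02 09:14",
    },
    {
        "required_task": "Transmission Flush",
        "free_text_log": "Wiped down the dashboard",
        "logged_at": "2015-03-04 16:02",
    },
]
```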
Legacy systems are notorious for encoding issues. You might have full-width characters, inconsistent capitalization, and random special characters. Before you tokenize, you must normalize.
We use NFKC (Normalization Form Compatibility Composition) to standardize characters.
```python
import unicodedata
import re

def normalize_text(text):
    if not isinstance(text, str):
        return ""
    # 1. Unicode Normalization (fixes width issues, accents, etc.)
    text = unicodedata.normalize('NFKC', text)
    # 2. Case Folding
    text = text.lower()
    # 3. Remove noise (special chars that don't add semantic value),
    #    keeping alphanumerics and basic punctuation
    text = re.sub(r'[^a-z0-9\s\-/]', '', text)
    return text.strip()

# Example
raw_log = "Ｏｉｌ Ｃｈａｎｇｅ （５Ｗ－３０）"  # Full-width chars
print(f"Cleaned: {normalize_text(raw_log)}")
# Output: Cleaned: oil change 5w-30
```
General-purpose NLP libraries (like NLTK or spaCy) often stumble on industry jargon. To a generic model, "CVT" might mean nothing, but in automotive terms it means "Continuously Variable Transmission."
You need a Synonym Mapping (Thesaurus) to align the free-text logs with your standard columns.
**The Logic:** Map all variations to a single "Root Term."
```python
import re

# A dictionary mapping a canonical term to its known variations
thesaurus = {
    "transmission": ["trans", "tranny", "gearbox", "cvt"],
    # Include the plain phrase "air filter" so checklist wording
    # maps to the same canonical token as the slang variants
    "air_filter": ["air filter", "air element", "filter-air", "a/c filter"],
    "brake_pads": ["pads", "shoe", "braking material"]
}

def apply_thesaurus(text, mapping):
    # Replace every known variation with its canonical root term.
    # Longer variations are applied first so multi-word phrases
    # like "air element" are matched before their individual words.
    for canonical, variations in mapping.items():
        for variation in sorted(variations, key=len, reverse=True):
            text = re.sub(r'\b' + re.escape(variation) + r'\b', canonical, text)
    return text

# Example
log_entry = "replaced cvt and air element"
print(apply_thesaurus(log_entry, thesaurus))
# Output: replaced transmission and air_filter
```
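As a quick sanity check, chaining the two steps cleans a couple of raw entries end to end (the full-width string is the same one used in the normalization example):

```python
# End-to-end cleaning: normalize first, then map jargon to root terms.
raw_entries = [
    "Replaced CVT and air element",
    "Ｏｉｌ Ｃｈａｎｇｅ （５Ｗ－３０）",
]
cleaned = [apply_thesaurus(normalize_text(e), thesaurus) for e in raw_entries]
print(cleaned)
# ['replaced transmission and air_filter', 'oil change 5w-30']
```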
Now that the text is consistent, we need to turn it into math. We use TF-IDF (Term Frequency-Inverse Document Frequency).
**Why TF-IDF instead of simple word counts?** Because in maintenance logs, words like "checked," "done," or "completed" appear everywhere. They are high frequency but low information. TF-IDF downweights these common words and highlights the unique components (like "Brake Caliper" or "Timing Belt").
```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Sample dataset
documents = [
    "replaced transmission fluid",
    "changed engine oil and air_filter",
    "checked brake_pads and rotors",
    "standard inspection done"
]

# Create the vectorizer and fit it on the corpus
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(documents)

# The result is a matrix where rows are logs and columns are words.
# High values indicate words that define that specific log entry.
```
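To see the downweighting for yourself, inspect the fitted vocabulary and its IDF weights. On this toy corpus, the shared word "and" scores lower than one-off terms like "transmission" (this assumes scikit-learn 1.0+, where the accessor is `get_feature_names_out`):

```python
# Inspect the learned vocabulary and its IDF weights.
# The shared word "and" receives a lower weight than terms that
# appear in only one log, such as "transmission" or "brake_pads".
for term, idf in sorted(zip(vectorizer.get_feature_names_out(), vectorizer.idf_),
                        key=lambda pair: pair[1]):
    print(f"{term:15s} idf={idf:.2f}")
```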
Here is the business value.
You have a Bill of Materials (BOM) or a checklist that says "Brake Inspection" occurred.
You have a free-text log that says "Visual check of tires."
Do they match? If we rely on simple keyword matching, we might miss context. Cosine Similarity measures the angle between the two vectors, giving us a score from 0 (No match) to 1 (Perfect match).
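Under the hood, the score is just the normalized dot product of the two vectors. Here is a minimal NumPy sketch with made-up three-dimensional vectors, purely for illustration:

```python
import numpy as np

def cosine(u, v):
    # cos(theta) = (u . v) / (||u|| * ||v||)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy TF-IDF-style vectors over a three-term vocabulary
checklist_vec = np.array([0.0, 1.0, 0.5])
log_vec = np.array([0.2, 0.9, 0.4])
print(f"{cosine(checklist_vec, log_vec):.3f}")  # ~0.98: strong overlap
```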
The Use Case: Fraud Detection. If a service provider bills for a "Full Engine Overhaul" but the text log is semantically dissimilar (e.g., only mentions "Wiper fluid"), we flag it.
```python
from sklearn.metrics.pairwise import cosine_similarity

def verify_maintenance(checklist_item, mechanic_log):
    # 1. Preprocess both inputs
    clean_checklist = apply_thesaurus(normalize_text(checklist_item), thesaurus)
    clean_log = apply_thesaurus(normalize_text(mechanic_log), thesaurus)

    # 2. Vectorize
    # Note: in production, fit on the whole corpus,
    # then transform these specific instances
    vectors = vectorizer.transform([clean_checklist, clean_log])

    # 3. Calculate similarity
    score = cosine_similarity(vectors[0], vectors[1])[0][0]
    return score

# Scenario A: Good match
checklist = "Replace Air Filter"
log = "Changed the air element and cleaned housing"
score_a = verify_maintenance(checklist, log)
print(f"Scenario A Score: {score_a:.4f}")
# Result: high score (roughly 0.6 on this toy corpus)

# Scenario B: Potential fraud / error
checklist = "Transmission Flush"
log = "Wiped down the dashboard"
score_b = verify_maintenance(checklist, log)
print(f"Scenario B Score: {score_b:.4f}")
# Result: low score (0.0 here, since nothing overlaps)
```
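To turn this into an actual screen, run the score across a batch of work orders and flag the low-similarity ones. The sketch below uses an arbitrary 0.3 cutoff as a starting point; in practice you would calibrate the threshold against manually reviewed examples:

```python
# Illustrative sketch: screen a batch of work orders and flag
# any whose billed task and free-text log diverge too much.
# The 0.3 cutoff is an arbitrary starting point, not a tuned value.
SIMILARITY_THRESHOLD = 0.3

batch = [
    ("Replace Air Filter", "Changed the air element and cleaned housing"),
    ("Transmission Flush", "Wiped down the dashboard"),
]

for billed_task, free_text in batch:
    score = verify_maintenance(billed_task, free_text)
    status = "OK" if score >= SIMILARITY_THRESHOLD else "FLAG FOR REVIEW"
    print(f"{billed_task!r:25s} -> {score:.2f} [{status}]")
```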
By implementing this pipeline, you convert "Dirty Data" into a structured asset.
The Real-World Impact:

- **Fraud detection:** flag invoices where the billed task and the free-text log don't line up.
- **Compliance:** verify that required maintenance was actually performed, not just ticked off.
- **Cleaner downstream analytics:** normalized, vectorized logs can feed classifiers and LLMs without drowning them in noise.
Don't let your legacy data rot in a data swamp. Clean it, vectorize it, and put it to work.


