Starting around 2015, the field of natural language processing (NLP) was revolutionized by deep neural techniques. Classical techniques, such as word stemming (for example, reducing "running" to "run"), and statistical techniques, such as latent Dirichlet allocation (LDA) topic modeling, are still important, but neural techniques have driven huge progress in NLP after decades of relative stagnation.
When I think of neural-based NLP, I usually categorize techniques by the type of problem being solved. Briefly:
I. High Level Problems
1. Question Answering Ex: Given a company’s employee manual and a question, give the answer.
2. Automatic Summarization Ex: Given a news story, generate a three-sentence summary.
3. Topic Extraction Ex: Given a legal contract, generate a one-phrase topic for each section.
4. Text Generation Ex: Given a novel, generate an artificial novel in the same style.
5. Translation Ex: Translate a Wikipedia page from English to Norwegian.
6. Sentiment Analysis Ex: Given a movie review, determine if it’s positive, negative, or neutral.
II. Low Level Problems
1. Part-of-Speech Tagging Ex: Given a paragraph, label each word as noun, verb, adverb, etc.
2. Word Embeddings Ex: Given a set of bank documents, map each word to a 100-value vector.
3. Optical Character Recognition Ex: Convert a JPG image of a document into digital text format.
4. Speech to Text Ex: Convert an audio file of a college lecture into digital text format.
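To make the word-embedding idea concrete, here is a minimal sketch (not any particular library's API) that builds dense word vectors from a toy corpus using a co-occurrence matrix followed by truncated SVD. The corpus, the window size, and the 4-value vectors (rather than 100) are all illustrative assumptions; real systems use techniques like word2vec or GloVe on much larger corpora.

```python
import numpy as np

# toy "bank documents" corpus (illustrative)
corpus = [
    "the bank approved the loan",
    "the loan was paid to the bank",
    "the river bank was muddy",
]

# build the vocabulary and a word-to-index map
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# count co-occurrences within a +/-2 word window
n = len(vocab)
C = np.zeros((n, n))
window = 2
for sent in tokens:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1.0

# truncated SVD maps each word to a dense 4-value vector
U, S, Vt = np.linalg.svd(C)
dim = 4
embeddings = U[:, :dim] * S[:dim]  # shape (vocab_size, 4)

print(embeddings[idx["bank"]])  # the 4-value vector for "bank"
```

Words that appear in similar contexts end up with similar vectors, which is the property downstream models exploit.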
It’s human nature to categorize all things. Homemade Star Wars costume categories from left to right: clever, creative, elaborate, inventive.
