API Reference

Scoring API Docs

ScoreContent exposes four browser-based scoring functions from lib/scoring.js. These functions run entirely in the browser with no network calls and no API keys. They are available within the ScoreContent codebase and can be copied into any JavaScript project.

Quick Import
import { calcGrammar, calcSpelling, calcHuman, calcSEO } from '@/lib/scoring'
📝 Grammar

calcGrammar(text)

Evaluates syntactic and grammatical quality. Detects passive voice, run-on sentences, fragments, subject–verb agreement errors, redundant phrases, and sentence complexity.

Parameters

text: string
Plain text content to evaluate.

Returns

score: number
0–100 grammar quality score.
issues: Issue[]
Array of detected issues with type, label, and detail.
metrics: Metric[]
Passive voice count, avg sentence length, active voice ratio, run-ons, fragments, redundant phrases.

Example

import { calcGrammar } from '@/lib/scoring'

const result = calcGrammar("He are going to the store. This is a very, very long sentence that has too many clauses because it keeps going on and on without stopping.")

// result.score      → 68
// result.issues     → [{ k: 'error', l: 'Subject-verb error', d: 'he/she/it are → is' }, ...]
// result.metrics    → [{ label: 'Passive Voice', val: 0, note: 'Good', good: true }, ...]
🔤 Spelling

calcSpelling(text)

Checks text against a dictionary of 60+ common misspellings, detects repeated consecutive words, overused vocabulary, and double punctuation, and reports an overall spelling accuracy percentage.

Parameters

text: string
Plain text content to check.

Returns

score: number
0–100 spelling accuracy score.
issues: Issue[]
Misspellings with corrections, repeated words, overused terms.
metrics: Metric[]
Misspelled count, accuracy %, repeated words, overused words, vocabulary size.

Example

import { calcSpelling } from '@/lib/scoring'

const result = calcSpelling("I beleive this is a definately good idea. The the problem is recieve.")

// result.score    → 52
// result.issues   → [{ k: 'error', l: '3 misspelled word(s)', d: '"beleive" → "believe" · ...' }]
// result.metrics  → [{ label: 'Misspelled Words', val: 3, note: '3 found', good: false }, ...]
🧬 Human / AI Detection

calcHuman(text)

Analyses 12 linguistic signals to determine whether text reads as human-written or AI-generated. Returns human score, AI probability, naturalness, engagement, and per-signal breakdown.

Parameters

text: string
Plain text (minimum 20 words recommended).

Returns

score: number
Human score 0–100. Higher = more human.
aiProbability: number
Estimated AI probability (0–100).
naturalnessScore: number
Composite naturalness 0–100.
engagementScore: number
Composite engagement 0–100.
signals: Record<string, number>
12 named signals: Rhythm Variance, Vocab Richness, Personal Voice, Contractions, Active Voice, Punctuation Depth, Questions Used, Phrase Originality, Sentence Starts, AI Transitions, Narrative Flow, Conversational Tone.
issues: Issue[]
AI clichés detected, tone flags, suggestions.
metrics: Metric[]
Human score, AI probability, naturalness, engagement, AI clichés, vocab diversity.

Example

import { calcHuman } from '@/lib/scoring'

const result = calcHuman("Furthermore, it is important to note that leveraging cutting-edge solutions plays a crucial role in today's world.")

// result.score           → 28  (likely AI)
// result.aiProbability   → 72
// result.signals         → { 'Rhythm Variance': 22, 'AI Transitions': 14, ... }
// result.issues          → [{ k: 'warn', l: 'AI clichés: 4 found', d: '"furthermore,", "cutting-edge"...' }]
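The per-signal breakdown lends itself to targeted rewriting suggestions. A minimal sketch that surfaces the weakest signals first, assuming an arbitrary cutoff of 30; the sample values below are illustrative, not real calcHuman output:

```javascript
// Hypothetical signals object, shaped like calcHuman(text).signals.
// Values are illustrative only.
const signals = {
  'Rhythm Variance': 22,
  'Vocab Richness': 61,
  'Personal Voice': 45,
  'Contractions': 18,
  'AI Transitions': 14,
}

// Sort ascending so the weakest (most AI-like) signals come first,
// then keep only those under an assumed "needs work" cutoff of 30.
const weakest = Object.entries(signals)
  .sort(([, a], [, b]) => a - b)
  .filter(([, value]) => value < 30)
  .map(([name]) => name)

console.log(weakest) // ['AI Transitions', 'Contractions', 'Rhythm Variance']
```

Presenting the two or three weakest signals is usually more actionable than showing the raw aiProbability alone.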
📈 SEO

calcSEO(text, html)

Evaluates search engine optimisation quality across word count, heading structure (H1/H2/H3), transition word density, lists, paragraph structure, Flesch-Kincaid readability, keyword density, and bold text usage.

Parameters

text: string
Plain text content.
html: string
HTML string (from innerHTML) for tag-level analysis.

Returns

score: number
0–100 SEO quality score.
readabilityScore: number
Flesch-Kincaid readability 0–100.
keywordScore: number
Keyword density/repetition score 0–100.
structureScore: number
Heading structure score 0–100.
topKeywords: [string, number][]
Top 5 repeated keywords with counts.
issues: Issue[]
Word count tier, heading gaps, transition word count, readability grade.
metrics: Metric[]
SEO score, readability, keyword score, structure, word count, transitions.

Example

import { calcSEO } from '@/lib/scoring'

const text = document.getElementById('editor').innerText
const html = document.getElementById('editor').innerHTML
const result = calcSEO(text, html)

// result.score           → 74
// result.readabilityScore → 68
// result.topKeywords     → [['content', 8], ['score', 5], ...]
// result.issues          → [{ k: 'good', l: 'Word count: 1240', d: 'Good. 1500+ is ideal.' }, ...]
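The four module scores can also be folded into a single overall number. A minimal sketch, assuming equal weighting; the combineScores helper and its weights are illustrative, not part of lib/scoring.js:

```javascript
// Hypothetical helper that averages the four module scores into one
// overall number. Equal weights are an assumption; adjust to taste.
function combineScores({ grammar, spelling, human, seo }) {
  const weights = { grammar: 0.25, spelling: 0.25, human: 0.25, seo: 0.25 }
  return Math.round(
    grammar * weights.grammar +
    spelling * weights.spelling +
    human * weights.human +
    seo * weights.seo
  )
}

// Using the example scores from the sections above.
console.log(combineScores({ grammar: 68, spelling: 52, human: 28, seo: 74 })) // 56
```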

Type Definitions

type Issue
k: "good" | "warn" | "error" | "tip"
Severity level.
l: string
Short label for the issue.
d: string
Detailed description or suggestion.

type Metric
label: string
Display name.
val: string | number
Metric value.
note: string
Short verdict text.
good: boolean
Whether the metric is in good range.
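Because every module returns the same Issue shape, a single renderer can serve all four. A minimal sketch that groups issues by their k severity key so errors can be shown first; the sample array is illustrative:

```javascript
// Sample Issue[] shaped like the type above (illustrative values).
const issues = [
  { k: 'error', l: 'Subject-verb error', d: 'he/she/it are → is' },
  { k: 'warn',  l: 'Long sentence',      d: '34 words; consider splitting' },
  { k: 'tip',   l: 'Add a question',     d: 'Questions can boost engagement' },
  { k: 'error', l: 'Misspelled word',    d: '"beleive" → "believe"' },
]

// Group issues by severity key ('good' | 'warn' | 'error' | 'tip').
const bySeverity = issues.reduce((acc, issue) => {
  (acc[issue.k] ||= []).push(issue)
  return acc
}, {})

console.log(bySeverity.error.length) // 2
console.log(bySeverity.warn.length)  // 1
```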

Try it in the editor

All four modules run live as you type. No setup, no key, no server.
