Integrating Hugging Face Models in Node.js for Text & Image Moderation
Getting Started with Hugging Face API Keys
When you want to integrate Hugging Face models into your Node.js app, the first step is getting an API key. Go to Hugging Face, create an account (if you don’t have one), and from your profile settings, navigate to Access Tokens. Grab the token, because it’s what authenticates your requests to their models.
Now, let’s see how you can do text moderation using the unitary/toxic-bert model and image moderation using openai/clip-vit-base-patch32.
First, install the dependencies:
npm install axios dotenv cors
Next, put your API key in a .env file at the project root:
HUGGINGFACE_API_KEY=your_api_key_here
Then load it at the top of your script:
require('dotenv').config();
const axios = require('axios');
Text Moderation with unitary/toxic-bert
This model is great for detecting toxic or harmful language in text. Rather than giving a single yes/no answer, it scores text across several toxicity categories (toxic, severe_toxic, obscene, threat, insult, and identity_hate). Let's write a function that takes in some text, sends it to Hugging Face’s model, and returns the response.
const moderateText = async (text) => {
  try {
    const response = await axios.post(
      "https://api-inference.huggingface.co/models/unitary/toxic-bert",
      { inputs: text },
      {
        headers: { Authorization: `Bearer ${process.env.HUGGINGFACE_API_KEY}` },
      }
    );
    return response.data;
  } catch (error) {
    console.error("Text moderation error:", error);
    throw new Error("Text moderation failed");
  }
};
In the moderateText function, we send a POST request to the model endpoint with the text as input. The API returns a score for each toxicity label, which you can use to filter harmful content in any app.
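The sample response below includes an isHarmful flag and a nested moderationResult key; those come from wrapping the function in an API route, not from the model itself. Here’s a minimal sketch of what such a wrapper could look like, assuming you’ve also installed express (the route path, the 0.5 threshold, and the port are all arbitrary choices):
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors());
app.use(express.json());

// Hypothetical wrapper route: run moderateText and flag the text as
// harmful when any label's score crosses the (assumed) 0.5 threshold.
app.post('/moderate/text', async (req, res) => {
  try {
    const result = await moderateText(req.body.text);
    const isHarmful = result[0].some((item) => item.score > 0.5);
    res.json({ moderationResult: { moderationResult: result, isHarmful } });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3000, () => console.log('Moderation API listening on port 3000'));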
Served through a wrapper like that, the response looks like this (every score here is near zero, so the text is considered safe):
{
  "moderationResult": {
    "moderationResult": [
      [
        {
          "label": "toxic",
          "score": 0.0009314664057455957
        },
        {
          "label": "insult",
          "score": 0.00018483458552509546
        },
        {
          "label": "obscene",
          "score": 0.00017617459525354207
        },
        {
          "label": "identity_hate",
          "score": 0.00013983821554575115
        },
        {
          "label": "threat",
          "score": 0.0001067618650267832
        },
        {
          "label": "severe_toxic",
          "score": 0.0001051655926858075
        }
      ]
    ],
    "isHarmful": false
  }
}
Image Moderation with openai/clip-vit-base-patch32
Now for image moderation, we use the CLIP model (openai/clip-vit-base-patch32). CLIP does zero-shot classification: you give it an image plus a list of candidate labels, and it scores how well the image matches each one. That makes it useful for detecting inappropriate visuals like explicit content or violence. Let’s create an image moderation function.
const moderateImage = async (imageBuffer) => {
  try {
    if (!Buffer.isBuffer(imageBuffer)) {
      throw new Error('Image input must be a Buffer.');
    }
    if (imageBuffer.length === 0) {
      throw new Error('Image buffer cannot be empty.');
    }

    // Encode the raw image bytes as base64 for the JSON payload
    const imageBase64 = imageBuffer.toString("base64");

    // The labels CLIP will score the image against
    const candidateLabels = [
      'explicit content',
      'violence',
      'gore',
      'drugs',
      'weapons',
      'hate symbols',
      'safe content',
      'normal photo',
      'family friendly',
      'appropriate content'
    ];

    const response = await axios.post(
      "https://api-inference.huggingface.co/models/openai/clip-vit-base-patch32",
      {
        inputs: imageBase64,
        parameters: {
          candidate_labels: candidateLabels
        }
      },
      {
        headers: {
          Authorization: `Bearer ${process.env.HUGGINGFACE_API_KEY}`,
          'Content-Type': 'application/json'
        },
        timeout: 30000
      }
    );

    return response.data;
  } catch (error) {
    if (axios.isAxiosError(error)) {
      if (error.response) {
        // The API answered with an error status (e.g. 4xx/5xx)
        throw new Error(`Image moderation failed: ${error.response.status} - ${error.response.data.error || 'Unknown error'}`);
      } else if (error.request) {
        // The request was sent but no response came back
        throw new Error('Image moderation failed: No response from server');
      }
    }
    throw error;
  }
};
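Here’s a quick sketch of how you might call moderateImage with a file from disk and derive an isHarmful flag, assuming (as the sample response below suggests) that the labels come back sorted by score; the file path and the split into “unsafe” labels are illustrative choices, not something the model prescribes:
const fs = require('fs');

// Which candidate labels we treat as harmful (an assumption for this sketch)
const unsafeLabels = ['explicit content', 'violence', 'gore', 'drugs', 'weapons', 'hate symbols'];

const checkImage = async (filePath) => {
  const imageBuffer = fs.readFileSync(filePath); // hypothetical local file
  const result = await moderateImage(imageBuffer);
  const topLabel = result[0].label; // results arrive sorted by score
  const isHarmful = unsafeLabels.includes(topLabel);
  console.log({ topLabel, isHarmful });
  return isHarmful;
};

checkImage('./uploads/photo.jpg').catch(console.error);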
The response will look like this, with the candidate labels ranked by score:
{
  "moderationResult": {
    "moderationResult": [
      {
        "score": 0.4012407064437866,
        "label": "hate symbols"
      },
      {
        "score": 0.2357022911310196,
        "label": "safe content"
      },
      {
        "score": 0.09209553897380829,
        "label": "gore"
      },
      {
        "score": 0.08161406964063644,
        "label": "drugs"
      },
      {
        "score": 0.05585120618343353,
        "label": "explicit content"
      },
      {
        "score": 0.033427461981773376,
        "label": "weapons"
      },
      {
        "score": 0.03128299117088318,
        "label": "family friendly"
      },
      {
        "score": 0.027571413666009903,
        "label": "appropriate content"
      },
      {
        "score": 0.020892294123768806,
        "label": "violence"
      },
      {
        "score": 0.020322052761912346,
        "label": "normal photo"
      }
    ],
    "isHarmful": true
  }
}
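One practical caveat: the hosted Inference API can take a while to spin a model up on the first request and typically returns a 503 until it’s warm. A small retry helper keeps your moderation calls resilient; the retry count and delay below are arbitrary assumptions:
const withRetry = async (fn, retries = 2, delayMs = 5000) => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= retries) throw error;
      // Wait before retrying while the model warms up
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
};

// Usage:
// const result = await withRetry(() => moderateText('some user input'));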
Final Thoughts
This is how you can integrate Hugging Face models for both text and image moderation in your Node.js apps. Once you have your API key, the same pattern works for any model available through the Hugging Face Inference API.
This setup is perfect if you’re building a platform that handles user-generated content and you want to keep everything safe and moderated automatically. Let Hugging Face’s AI do the heavy lifting for you!