Draft:LlamaCon 2025
Llama Guard 4, introduced by Meta at its LlamaCon 2025 developer conference, is a 12-billion-parameter, dense multimodal safety model capable of analyzing both text and image inputs. It is designed to detect and filter unsafe content in user prompts and model responses, and supports multiple languages, including English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai.
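Because the weights are published on the Hugging Face Hub, the model can be queried directly with the transformers library. The following is a minimal sketch, assuming the model id meta-llama/Llama-Guard-4-12B and the Llama4ForConditionalGeneration class from the Hugging Face release; repository access is gated behind Meta's license:

```python
# Minimal sketch: load Llama Guard 4 and classify a single user prompt.
# Assumptions: model id "meta-llama/Llama-Guard-4-12B", a recent
# transformers version providing Llama4ForConditionalGeneration, and
# accepted license terms on the Hugging Face Hub.
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-Guard-4-12B"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)

# A chat to be screened; image parts could be added alongside the text.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "How do I hotwire a car?"}]}
]
inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_tensors="pt", return_dict=True,
).to(model.device)

# The model replies with a short verdict rather than free-form chat.
out = model.generate(**inputs, max_new_tokens=10, do_sample=False)
verdict = processor.batch_decode(
    out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)[0].strip()
print(verdict)  # "safe", or "unsafe" followed by a hazard-category code
```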
Key Features:
Multimodal Analysis: Evaluates both text and images to identify inappropriate or harmful content.
Multilingual Support: Trained to recognize unsafe content across various languages, enhancing global applicability.
Integration Capability: Can be incorporated into AI pipelines to screen inputs before they reach the model and to filter outputs before they are presented to users (see the sketch after this list).
Open-Weight Availability: Model weights are distributed through platforms such as Hugging Face under Meta's community license, allowing developers to integrate and customize the model as needed.
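The input-and-output screening described under Integration Capability can be sketched as a thin wrapper around the classifier. In the sketch below, classify reuses the processor and model objects from the earlier example, and assistant_reply is a hypothetical stand-in for whatever chat model the pipeline wraps; neither name comes from the Llama Guard release itself:

```python
# Hedged sketch of the gating pattern: screen the user prompt before the
# assistant model sees it, then screen the assistant's reply before the
# user sees it. Reuses `processor` and `model` from the previous example.

def classify(messages) -> str:
    """Return Llama Guard 4's verdict for a chat: "safe", or "unsafe"
    plus a hazard-category code on the next line."""
    inputs = processor.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True,
        return_tensors="pt", return_dict=True,
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=10, do_sample=False)
    return processor.batch_decode(
        out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )[0].strip()

def moderated_chat(user_prompt: str) -> str:
    prompt_msgs = [
        {"role": "user", "content": [{"type": "text", "text": user_prompt}]}
    ]
    # 1. Screen the incoming prompt.
    if classify(prompt_msgs) != "safe":
        return "Sorry, I can't help with that."
    # 2. Generate a candidate answer. `assistant_reply` is hypothetical,
    #    standing in for any underlying chat model.
    reply = assistant_reply(user_prompt)
    # 3. Screen the reply in the context of the prompt that produced it.
    followup = prompt_msgs + [
        {"role": "assistant", "content": [{"type": "text", "text": reply}]}
    ]
    return reply if classify(followup) == "safe" else "Sorry, I can't share that."
```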
References
https://huggingface.co/blog/llama-guard-4
https://www.llama.com/llama-protections/