😎 Just so. πŸ‘€

d4rkf0x/README.md
  • πŸ‘‹ Hi, I’m @d4rkf0x, an AI Systems Engineer.
  • πŸ’πŸΌ I’m working on a custom WebUI interface for gaming.
  • πŸ‘€ I’m interested in almost everything. Thanks for that.
  • 🌱 I’m currently learning Paranormal Investigations at The End of Trail Of Tears, Colorado.
  • πŸ’žοΈ I’m looking to collaborate on almost anything that helps the human race find freedom.
  • πŸ“« How to reach me: d4rkfox6@x.com
  • πŸ˜„ Pronouns: they/them, 29Γ…
  • ⚑ Fun fact: I am currently employed.
  • πŸ‘οΈ Looking forward to moving on with my life.
  • 🎢 I’m just now getting into married life; feel free to congratulate me when you see me IRL.

Pinned

  1. d4rkf0x (Public)

    Config files for my GitHub profile.

  2. open-webui (Public)

    Forked from open-webui/open-webui

    User-friendly WebUI for AI (Formerly Ollama WebUI)

    JavaScript

  3. ollama (Public)

    Forked from ollama/ollama

    Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.

    Go

  4. confluence (Public)

    Forked from haxqer/confluence

    The simplest Dockerfile for Confluence. Supports v8.9.5 (latest), v9.0.1 (latest), and v8.5.12 (LTS).

    Dockerfile

  5. genai-stack (Public)

    Forked from docker/genai-stack

    Langchain + Docker + Neo4j + Ollama

    Python

  6. wllama (Public)

    Forked from ngxson/wllama

    WebAssembly binding for llama.cpp - Enabling in-browser LLM inference

    C++