# KnoksPix – AI-Powered Image & Code Companion
KnoksPix is an AI-first creative workspace combining:
- Intelligent image editing & adjustment workflow
- Generative augmentation via Gemini / (optional) local Starcoder2 backend
- Cross‑platform delivery (Web + Desktop [Electron] + Mobile [Capacitor/Android])
- Extensible architecture (pluggable model backends & tools)
Demo: View the app in AI Studio
## Table of Contents
- Features
- Screenshots
- Quick Start
- Frontend
- Backend API Service (Starcoder2)
- Deployment Options
- Preview Deployments
- Environment Variables
- Testing
- Architecture
- Security
- Release Checklist
- Contributing
- License
## Features
| Area | Highlights |
|---|---|
| Image Editing | Crop, filters, adjustment panel, object layer cards |
| AI Integration | Gemini API for generation (pluggable) |
| Local LLM Option | Optional Starcoder2 backend with SSE streaming, OpenAI-compatible endpoint |
| Performance | Vite + React + Code-splitting |
| Cross Platform | Web, Electron desktop builds, Android (Capacitor) |
| Tooling | Jest tests, GitHub Actions smoke & CI, PR preview deploys |
| Observability | Prometheus metrics, structured logging (backend) |
| Rate Limiting | SlowAPI sliding window on backend |
## Screenshots
## Quick Start
### Frontend Only (using hosted Gemini)
```bash
git clone https://github.com/knoksen/knoksPix.git
cd knoksPix
npm ci
echo "GEMINI_API_KEY=your_key" > .env.local
npm run dev
```
### Full Stack (with local Starcoder2 backend)
```bash
cp backend/.env.sample backend/.env
# Edit backend/.env as needed (MODEL_ID, tokens, mock mode etc.)
docker compose up --build
# Frontend (separate terminal)
npm run dev
```
Backend API docs: http://localhost:8000/docs
## Frontend
React + TypeScript + Vite. Core UI elements live in `components/`; state is localized per panel to keep the bundle lean. Tests under `tests/` use Jest + React Testing Library.
Build the production bundle:

```bash
npm run build
```

Serve it locally:

```bash
npm run preview
```
## Backend API Service (Starcoder2)
Located in `backend/`. Provides:

- `POST /v1/chat/completions` – OpenAI-style chat (stream or non-stream)
- `POST /v1/generate` – Simple prompt generation (stream or non-stream)
- `GET /metrics` – Prometheus metrics
- `GET /healthz` – Liveness check
Streaming uses Server-Sent Events (SSE). The chat endpoint emits OpenAI-compatible `chat.completion.chunk` objects; the generate endpoint emits `{ "text": "..." }` chunks followed by a `[DONE]` sentinel.
Mock mode (no model download): set `USE_MOCK_GENERATION=1`.
### Backend Setup
```bash
cd backend
python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt -r requirements-dev.txt
cp .env.sample .env  # edit values
uvicorn main:app --reload --port 8000
```
### Configuration Reference
| Variable | Purpose | Default |
|---|---|---|
| `MODEL_ID` | HF model id to load | `bigcode/starcoder2-3b` |
| `HF_TOKEN` | (Optional) auth for private models | empty |
| `STARCODER2_API_TOKEN` | Bearer token required by clients | `changeme` |
| `USE_MOCK_GENERATION` | Skip model load; return synthetic outputs | `0` |
| `MAX_NEW_TOKENS_LIMIT` | Hard upper bound on user-requested tokens | `512` |
| `RATE_LIMIT` | slowapi rate expression | `100/minute` |
| `LOG_LEVEL` | Logging threshold | `INFO` |
### Streaming Protocol Details
Chat endpoint (`/v1/chat/completions`, `stream=true`):

```text
data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":"def"}}]}
data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":" add"}}]}
data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{}}],"finish_reason":"stop"}
data: [DONE]
```
Generate endpoint (`/v1/generate`, `stream=true`):

```text
data: {"text":"partial token"}
data: {"text":" more"}
data: [DONE]
```
Both endpoints send `text/event-stream; charset=utf-8` and can be consumed with any SSE client. Non‑stream mode aggregates the full text into a single JSON object.
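For example, a minimal Python consumer for the generate stream — a sketch that assumes the default local port and the sample `changeme` token:

```python
# Minimal sketch of an SSE consumer for /v1/generate (assumes local defaults).
import json

import requests

resp = requests.post(
    "http://localhost:8000/v1/generate",
    headers={"Authorization": "Bearer changeme"},
    json={"prompt": "def add(a, b):", "max_new_tokens": 64, "stream": True},
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue  # skip blank separators between events
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break  # end-of-stream sentinel
    print(json.loads(payload)["text"], end="", flush=True)
```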
### Observability
Metric names (Prometheus):

- `http_requests_total` / latency histograms (instrumentator defaults)
- Custom counters (planned): token usage per request (placeholder if not yet present)
Dashboards: point Grafana at the Prometheus service (see `docker-compose.yml`).
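The "instrumentator defaults" above typically come from `prometheus-fastapi-instrumentator`; a minimal sketch of that wiring, illustrative rather than the repo's exact code:

```python
# Illustrative sketch: default HTTP metrics exposed on /metrics via
# prometheus-fastapi-instrumentator (assumed library; check backend/main.py).
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Registers default request counters / latency histograms and mounts GET /metrics.
Instrumentator().instrument(app).expose(app)
```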
### Performance Notes

- Use mock mode in CI to avoid multi‑GB model pulls.
- For GPU: extend Dockerfile build args (CUDA / ROCm) and launch with `--gpus=all` (already hinted in compose).
- Adjust `MAX_NEW_TOKENS_LIMIT` to guard latency & memory.
- Consider swapping to a `TextIteratorStreamer` for finer token pacing (roadmap); see the sketch after this list.
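A rough sketch of the `transformers` pattern that roadmap item points at — it assumes `model` and `tokenizer` are already loaded and is not the current backend code:

```python
# Rough sketch of token-by-token streaming with transformers' TextIteratorStreamer.
# Assumes `model` and `tokenizer` are already loaded (e.g. bigcode/starcoder2-3b).
from threading import Thread

from transformers import TextIteratorStreamer

def stream_tokens(model, tokenizer, prompt: str, max_new_tokens: int = 64):
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    # generate() blocks, so run it in a background thread and yield as tokens arrive.
    Thread(
        target=model.generate,
        kwargs=dict(**inputs, streamer=streamer, max_new_tokens=max_new_tokens),
    ).start()
    for text in streamer:
        yield text  # each chunk maps naturally onto one SSE {"text": ...} event
```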
Example (chat streaming):

```bash
curl -N \
  -H "Authorization: Bearer $STARCODER2_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "bigcode/starcoder2-3b",
    "messages": [{"role":"user","content":"Write a Python hello world"}],
    "stream": true
  }' \
  http://localhost:8000/v1/chat/completions
```
Example (generate streaming):

```bash
curl -N \
  -H "Authorization: Bearer $STARCODER2_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "def add(a,b):\n    return a+b",
    "max_new_tokens": 64,
    "stream": true
  }' \
  http://localhost:8000/v1/generate
```
### Python Helper Client

```python
from starcoder2_client import ChatClient

client = ChatClient(base_url="http://localhost:8000", token="changeme")
resp = client.chat([{"role": "user", "content": "Write a Python function add(a,b)."}])
print(resp["choices"][0]["message"]["content"])
```
## Deployment Options
This section is intentionally brief; the full matrices, one‑click buttons, hosting comparisons, and hardening tips have moved to `docs/DEPLOYMENT.md`.
Common quick paths:
| Scenario | Command / Action |
|---|---|
| Local frontend only | `npm run dev` |
| Local full stack (mock) | `(cd backend && USE_MOCK_GENERATION=1 uvicorn main:app --port 8000) & npm run dev` |
| Docker backend | `docker compose up --build` |
| GitHub Pages deploy | Auto on push to main |
| PR preview | Auto Surge URL comment |
See `docs/DEPLOYMENT.md` for buttons & providers.
## Preview Deployments
- Each PR triggers a build + Surge deployment.
- A unique URL is commented automatically on the PR.
- Main branch builds deploy to GitHub Pages (frontend) and can optionally trigger a backend container build (pipeline extension).
## Environment Variables
Frontend (`.env.local`):

```bash
GEMINI_API_KEY=your_key
VITE_API_BASE=http://localhost:8000
```
Backend (`backend/.env`) – see `backend/.env.sample` for the full list:

```bash
MODEL_ID=bigcode/starcoder2-3b
STARCODER2_API_TOKEN=changeme
USE_MOCK_GENERATION=1
RATE_LIMIT=100/minute
MAX_NEW_TOKENS_LIMIT=512
LOG_LEVEL=INFO
```
## Testing
Run UI tests:

```bash
npm test
```

Backend tests (mock mode):

```bash
pytest
```
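A backend smoke test in that style might look like the following — hypothetical names; the real tests live under `backend/`:

```python
# Hypothetical smoke test, assuming mock mode and the FastAPI app in backend/main.py.
import os

os.environ["USE_MOCK_GENERATION"] = "1"  # avoid loading the real model

from fastapi.testclient import TestClient
from main import app

client = TestClient(app)

def test_healthz_is_alive():
    resp = client.get("/healthz")
    assert resp.status_code == 200
```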
## Architecture

```mermaid
flowchart LR
    A[React/Vite Frontend] -->|Fetch / Chat| B((FastAPI Backend))
    B -->|Generation| C[Starcoder2 Model]
    B -->|Mock Mode| C2[(In-Memory Mock)]
    B -->|/metrics| D[(Prometheus)]
    D --> G[Grafana]
    B -->|Structured Logs| L[(Log Aggregator)]
```
- Rate limiting: slowapi sliding window (see the sketch after this list)
- Streaming: SSE for both chat + generate
- Metrics: latency, request counts, token counter
- Multi-platform packaging: Electron + Capacitor
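A minimal sketch of that slowapi wiring — illustrative only; the route name here is hypothetical and the repo's actual setup may differ:

```python
# Illustrative slowapi wiring; the actual backend setup may differ.
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/v1/ping")  # hypothetical route for demonstration
@limiter.limit("100/minute")  # matches the RATE_LIMIT default above
async def ping(request: Request):  # slowapi requires the Request parameter
    return {"ok": True}
```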
## Security
- Secrets only via environment variables
- Bearer token auth on backend when a token is set (see the sketch after this list)
- Rate limiting enabled by default
- No secrets shipped in PR preview builds
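The bearer check can be expressed as a FastAPI dependency; a hypothetical sketch, not necessarily the repo's implementation:

```python
# Hypothetical FastAPI dependency enforcing the STARCODER2_API_TOKEN bearer check.
import os

from fastapi import Header, HTTPException

def require_token(authorization: str = Header(default="")) -> None:
    expected = os.environ.get("STARCODER2_API_TOKEN", "")
    # When no token is configured, auth is effectively disabled.
    if expected and authorization != f"Bearer {expected}":
        raise HTTPException(status_code=401, detail="Invalid or missing bearer token")
```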
## Release Checklist
| Item | Status | Notes |
|---|---|---|
| README updated | ✅ | Current file |
| License present | ✅ | MIT license in repo |
| Env samples | ✅ | backend/.env.sample |
| CI smoke tests | ✅ | smoke.yml badge passing |
| PR preview pipeline | ✅ | Surge deployment configured |
| Backend health endpoint | ✅ | /healthz present |
| Metrics endpoint | ✅ | /metrics (Prometheus) |
| Rate limiting | ✅ | slowapi configured |
| Streaming verified | ✅ | SSE implemented both endpoints |
| Mock mode | ✅ | USE_MOCK_GENERATION=1 |
| Image assets | ⚠️ | Replace placeholder screenshots |
| Deployment buttons | ✅ | Added Netlify/Vercel/etc |
| Backend tests | ✅ | Mock mode tests present |
| Release workflow | ✅ | Tag push triggers Electron & dist build |
| Dependabot | ✅ | .github/dependabot.yml configured |
| CodeQL scan | ✅ | codeql.yml workflow added |
| SBOM generation | ✅ | sbom.yml workflow (CycloneDX) |
## Run Locally (Frontend)
Prerequisites: Node.js 20 or higher
- Clone the repository:
  ```bash
  git clone https://github.com/knoksen/knoksPix.git
  cd knoksPix
  ```
- Install dependencies:
  ```bash
  npm ci
  ```
- Set up environment variables:
  - Create a `.env.local` file in the root directory
  - Add your Gemini API key: `GEMINI_API_KEY=your_api_key_here`
- Start the development server:
  ```bash
  npm run dev
  ```
- Open http://localhost:5173 in your browser
## Local Backend + Frontend Together
- Start the backend (mock for speed):
  ```bash
  (cd backend && USE_MOCK_GENERATION=1 uvicorn main:app --port 8000)
  ```
- Start the frontend:
  ```bash
  npm run dev
  ```
- Set `VITE_API_BASE=http://localhost:8000` in `.env.local` if calling the backend.
## Deployment
See Deployment Options above for the matrix. The production frontend is emitted to `dist/`.
## License

MIT. See `LICENSE`.
## Contributing
- Fork the repository
- Create a feature branch: `git checkout -b feat/awesome`
- Commit: `git commit -m 'feat: add awesome capability'`
- Push: `git push origin feat/awesome`
- Open a Pull Request – a preview URL will be auto‑attached.
Conventional commit prefixes (`feat:`, `fix:`, `docs:`) are encouraged. Small, focused PRs merge faster.