This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit 9dbe4e6

Merge pull request #158 from janhq/improvement/landing-page

web: nitro landing page

2 parents 5a71fb6 + 6a84f9b


61 files changed: +1357 −788 lines

docs/.gitignore

Lines changed: 2 additions & 1 deletion
@@ -18,4 +18,5 @@
 npm-debug.log*
 yarn-debug.log*
 yarn-error.log*
-.env
+yarn.lock
+.env

docs/babel.config.js

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 module.exports = {
-  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
+  presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
 };

docs/docs/features/chat.md

Lines changed: 7 additions & 2 deletions
@@ -24,6 +24,7 @@ curl http://localhost:3928/v1/chat/completions \
 }'

 ```
+
 </div>

 <div style={{ width: '50%', float: 'right', clear: 'right' }}>
@@ -42,6 +43,7 @@ curl https://api.openai.com/v1/chat/completions \
 ]
 }'
 ```
+
 </div>

 This command sends a request to your local LLM, querying about the winner of the 2020 World Series.
@@ -77,6 +79,7 @@ curl http://localhost:3928/v1/chat/completions \
 }'

 ```
+
 </div>

 <div style={{ width: '50%', float: 'right', clear: 'right' }}>
@@ -106,6 +109,7 @@ curl https://api.openai.com/v1/chat/completions \
 ]
 }'
 ```
+
 </div>

 ### Chat Completion Response
@@ -138,6 +142,7 @@ Below are examples of responses from both the Nitro server and OpenAI:
 }
 }
 ```
+
 </div>

 <div style={{ width: '50%', float: 'right', clear: 'right' }}>
@@ -166,7 +171,7 @@ Below are examples of responses from both the Nitro server and OpenAI:
 }
 }
 ```
-</div>

+</div>

-The chat completion feature in Nitro showcases compatibility with OpenAI, making the transition between using OpenAI and local AI models more straightforward. For further details and advanced usage, please refer to the [API reference](https://nitro.jan.ai/api).
+The chat completion feature in Nitro showcases compatibility with OpenAI, making the transition between using OpenAI and local AI models more straightforward. For further details and advanced usage, please refer to the [API reference](https://nitro.jan.ai/api-reference).
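The chat.md changes above document Nitro's OpenAI-compatible `/v1/chat/completions` endpoint at `http://localhost:3928`. As a rough sketch of what calling it from Python might look like — the `llama-2-7b-model` model name, the helper names, and the stdlib-only HTTP call are illustrative assumptions; only the URL and the OpenAI-style payload shape come from the docs:

```python
import json
import urllib.request

def build_chat_request(messages, model="llama-2-7b-model", stream=False):
    """Assemble an OpenAI-style chat completion payload for Nitro."""
    return {"model": model, "messages": messages, "stream": stream}

def chat(messages, host="http://localhost:3928"):
    """POST the payload to Nitro's endpoint (requires a running server)."""
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(build_chat_request(messages)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build the same query the docs use as their running example.
payload = build_chat_request(
    [{"role": "user", "content": "Who won the World Series in 2020?"}]
)
print(json.dumps(payload, indent=2))
```

Because the request shape matches OpenAI's, pointing the same helper at `https://api.openai.com` (plus an `Authorization` header) would be the only change needed to switch backends.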

docs/docs/features/embed.md

Lines changed: 6 additions & 3 deletions
@@ -26,6 +26,7 @@ curl http://localhost:3928/v1/embeddings \
 }'

 ```
+
 </div>
 <div style={{ width: '50%', float: 'right', clear: 'right' }}>

@@ -39,6 +40,7 @@ curl https://api.openai.com/v1/embeddings \
 "encoding_format": "float"
 }'
 ```
+
 </div>

 ## Embedding Reponse
@@ -61,8 +63,9 @@ The example response used the output from model [llama2 Chat 7B Q5 (GGUF)](https
 "object": "embedding"
 }
 ]
-}
+}
 ```
+
 </div>

 <div style={{ width: '50%', float: 'right', clear: 'right' }}>
@@ -83,7 +86,7 @@ The example response used the output from model [llama2 Chat 7B Q5 (GGUF)](https


 ```
-</div>

+</div>

-The embedding feature in Nitro demonstrates a high level of compatibility with OpenAI, simplifying the transition between using OpenAI and local AI models. For more detailed information and advanced use cases, refer to the comprehensive [API Reference]((https://nitro.jan.ai/api)).
+The embedding feature in Nitro demonstrates a high level of compatibility with OpenAI, simplifying the transition between using OpenAI and local AI models. For more detailed information and advanced use cases, refer to the comprehensive [API Reference](https://nitro.jan.ai/api-reference)).
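The embed.md section documents the matching OpenAI-compatible `/v1/embeddings` endpoint. A minimal stdlib-only sketch of the request it describes — the helper names and default model are assumptions; the endpoint path and the `input`/`model`/`encoding_format` fields come from the diff above:

```python
import json
import urllib.request

def build_embedding_request(text, model="llama-2-7b-model", encoding_format="float"):
    """OpenAI-style embeddings payload, as shown in the docs above."""
    return {"input": text, "model": model, "encoding_format": encoding_format}

def embed(text, host="http://localhost:3928"):
    """POST to Nitro's embeddings endpoint (requires a running server)."""
    req = urllib.request.Request(
        f"{host}/v1/embeddings",
        data=json.dumps(build_embedding_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Per the response example above, the vector lives at data[0].embedding.
        return json.load(resp)["data"][0]["embedding"]

print(build_embedding_request("Hello")["encoding_format"])
```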

docs/docs/new/about.md

Lines changed: 11 additions & 4 deletions
@@ -39,8 +39,9 @@ curl http://localhost:3928/v1/chat/completions \
 },
 ]
 }'
-
+
 ```
+
 </div>

 <div style={{ width: '50%', float: 'right', clear: 'right' }}>
@@ -63,6 +64,7 @@ curl https://api.openai.com/v1/chat/completions \
 ]
 }'
 ```
+
 </div>

 - **Extends OpenAI's API with helpful model methods:**
@@ -80,7 +82,7 @@ curl https://api.openai.com/v1/chat/completions \
 ### Multi-modal Capabilities

 - **Coming Soon**: Expansion to multi-modal functionalities - enabling Nitro to process and generate images, and audio.
-- **Features to Expect**:
+- **Features to Expect**:
   - Large Language-and-Vision Assistant.
   - Speech recognition and transcription.

@@ -90,25 +92,30 @@ curl https://api.openai.com/v1/chat/completions \
 - **Detailed Specifications**: For an in-depth understanding of Nitro's internal workings, components, and design philosophy, refer to our [Architecture Specifications](architecture.md).

 ## Support
+
 ### GitHub Issue Tracking
+
 - **Report Problems**: Encounter an issue with Nitro? File a [GitHub issue](https://github.com/janhq/nitro). Please include detailed error logs and steps to reproduce the problem.

 ### Discord Community
+
 - **Join the Conversation**: Discuss Nitro development and seek peer support in our [#nitro-dev](https://discord.gg/FTk2MvZwJH) channel on Discord.

 ## Contributing

 ### How to Contribute
+
 Nitro welcomes contributions in various forms, not just coding. Here are some ways you can get involved:

-- **Understand Nitro**: Start with the [Getting Started](nitro/overview) guide. Found an issue or have a suggestion? [Open an issue](https://github.com/janhq/nitro/issues) to let us know.
+- **Understand Nitro**: Start with the [Getting Started](/new/quickstart) guide. Found an issue or have a suggestion? [Open an issue](https://github.com/janhq/nitro/issues) to let us know.

 - **Feature Development**: Engage with community feature requests. Bring ideas to life by opening a [pull request](https://github.com/janhq/nitro/pulls) for features that interest you.

 ### Links
+
 - [Nitro GitHub Repository](https://github.com/janhq/nitro)

 ## Acknowledgements

 - [drogon](https://github.com/drogonframework/drogon): The fast C++ web framework
-- [llama.cpp](https://github.com/ggerganov/llama.cpp): Inference of LLaMA model in pure C/C++
+- [llama.cpp](https://github.com/ggerganov/llama.cpp): Inference of LLaMA model in pure C/C++

docs/docs/new/architecture.md

Lines changed: 1 addition & 9 deletions
@@ -7,6 +7,7 @@ title: Architecture
 ### Details element example

 ## Key Concepts
+
 ## Inference Server

 An inference server is a type of server designed to process requests for running large language models and to return predictions. This server acts as the backbone for AI-powered applications, providing real-time execution of models to analyze data and make decisions.
@@ -24,15 +25,6 @@ Parallel processing involves executing multiple computations simultaneously. For
 Drogon is an HTTP application framework based on C++14/17, designed for its speed and simplicity. Utilizing a non-blocking I/O and event-driven architecture, Drogon manages HTTP requests efficiently for high-performance and scalable applications.

 - **Event Loop**: Drogon uses an event loop to wait for and dispatch events or messages within a program. This allows for handling many tasks asynchronously, without relying on multi-threading.
-
 - **Threads**: While the event loop allows for efficient task management, Drogon also employs threads to handle parallel operations. These "drogon threads" process multiple tasks concurrently.
-
 - **Asynchronous Operations**: The framework supports non-blocking operations, permitting the server to continue processing other tasks while awaiting responses from databases or external services.
-
 - **Scalability**: Drogon's architecture is built to scale, capable of managing numerous connections at once, suitable for applications with high traffic loads.
-
-
-
-We should only have 1 document
-- [ ] Refactor system/architecture
-- [ ] Refactor system/key-concepts

docs/docs/new/build-source.md

Lines changed: 41 additions & 38 deletions
@@ -15,19 +15,21 @@ git clone --recurse https://github.com/janhq/nitro
 If you don't have git, you can download the source code as a file archive from [Nitro GitHub](https://github.com/janhq/nitro). Each [release](https://github.com/caddyserver/caddy/releases) also has source snapshots.

 ## Install Dependencies
+
 Next, let's install the necessary dependencies.

 - **On MacOS with Apple Silicon:**
-```bash
-./install_deps.sh
-```
+
+  ```bash
+  ./install_deps.sh
+  ```

 - **On Windows:**

-```bash
-cmake -S ./nitro_deps -B ./build_deps/nitro_deps
-cmake --build ./build_deps/nitro_deps --config Release
-```
+  ```bash
+  cmake -S ./nitro_deps -B ./build_deps/nitro_deps
+  cmake --build ./build_deps/nitro_deps --config Release
+  ```

 This creates a `build_deps` folder.

@@ -37,66 +39,67 @@ Now, let's generate the build files.

 - **On MacOS, Linux, and Windows:**

-```bash
-mkdir build && cd build
-cmake ..
-```
+  ```bash
+  mkdir build && cd build
+  cmake ..
+  ```

 - **On MacOS with Intel processors:**

-```bash
-mkdir build && cd build
-cmake -DLLAMA_METAL=OFF ..
-```
+  ```bash
+  mkdir build && cd build
+  cmake -DLLAMA_METAL=OFF ..
+  ```

 - **On Linux with CUDA:**

-```bash
-mkdir build && cd build
-cmake -DLLAMA_CUBLAS=ON ..
-```
+  ```bash
+  mkdir build && cd build
+  cmake -DLLAMA_CUBLAS=ON ..
+  ```

 ## Build the Application

 Time to build Nitro!

 - **On MacOS:**
-
-```bash
-make -j $(sysctl -n hw.physicalcpu)
-```
+
+  ```bash
+  make -j $(sysctl -n hw.physicalcpu)
+  ```

 - **On Linux:**

-```bash
-make -j $(%NUMBER_OF_PROCESSORS%)
-```
+  ```bash
+  make -j $(%NUMBER_OF_PROCESSORS%)
+  ```

 - **On Windows:**

-```bash
-cmake --build . --config Release
-```
+  ```bash
+  cmake --build . --config Release
+  ```

 ## Start process

 Finally, let's start Nitro.

 - **On MacOS and Linux:**

-```bash
-./nitro
-```
+  ```bash
+  ./nitro
+  ```

 - **On Windows:**

-```bash
-cd Release
-copy ..\..\build_deps\_install\bin\zlib.dll .
-nitro.exe
-```
+  ```bash
+  cd Release
+  copy ..\..\build_deps\_install\bin\zlib.dll .
+  nitro.exe
+  ```

 To verify if the build was successful:
+
 ```bash
 curl http://localhost:3928/healthz
-```
+```
docs/docs/new/quickstart.md

Lines changed: 13 additions & 7 deletions
@@ -1,19 +1,24 @@
 ---
 title: Quickstart
 ---
+
 ## Step 1: Install Nitro

 ### For Linux and MacOS
+
 Open your terminal and enter the following command. This will download and install Nitro on your system.
-```bash
-curl -sfL https://raw.githubusercontent.com/janhq/nitro/main/install.sh -o /tmp/install.sh && chmod +x /tmp/install.sh && sudo bash /tmp/install.sh --gpu && rm /tmp/install.sh
-```
+
+```bash
+curl -sfL https://raw.githubusercontent.com/janhq/nitro/main/install.sh -o /tmp/install.sh && chmod +x /tmp/install.sh && sudo bash /tmp/install.sh --gpu && rm /tmp/install.sh
+```

 ### For Windows
+
 Open PowerShell and execute the following command. This will perform the same actions as for Linux and MacOS but is tailored for Windows.
-```bash
-powershell -Command "& { Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/janhq/nitro/main/install.bat' -OutFile 'install.bat'; .\install.bat --gpu; Remove-Item -Path 'install.bat' }"
-```
+
+```bash
+powershell -Command "& { Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/janhq/nitro/main/install.bat' -OutFile 'install.bat'; .\install.bat --gpu; Remove-Item -Path 'install.bat' }"
+```

 > **NOTE:**Installing Nitro will add new files and configurations to your system to enable it to run.
@@ -24,6 +29,7 @@ For a manual installation process, see: [Install from Source](install.md)
 Next, we need to download a model. For this example, we'll use the [Llama2 7B chat model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main).

 - Create a `/model` and navigate into it:
+
 ```bash
 mkdir model && cd model
 wget -O llama-2-7b-model.gguf https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf?download=true
@@ -78,4 +84,4 @@ curl http://localhost:3928/v1/chat/completions \

 This command sends a request to Nitro, asking it about the 2020 World Series winner.

-- As you can see, A key benefit of Nitro is its alignment with [OpenAI's API structure](https://platform.openai.com/docs/guides/text-generation?lang=curl). Its inference call syntax closely mirrors that of OpenAI's API, facilitating an easier shift for those accustomed to OpenAI's framework.
+- As you can see, A key benefit of Nitro is its alignment with [OpenAI's API structure](https://platform.openai.com/docs/guides/text-generation?lang=curl). Its inference call syntax closely mirrors that of OpenAI's API, facilitating an easier shift for those accustomed to OpenAI's framework.

0 commit comments