
Commit 98ebd28 (parent: b9179cf)

[Docs] Add mlc.ai/package to DEPENDENCY INSTALLATION group (mlc-ai#1055)

Co-authored-by: Junru Shao <[email protected]>

File tree

7 files changed (+142, -22 lines)


docs/community/faq.rst (1 addition, 2 deletions)

@@ -13,5 +13,4 @@ This is a list of Frequently Asked Questions (FAQ) about the MLC-LLM. Feel free
 
 ... Why do I encounter an error ``free(): invalid pointer, Aborted (core dumped)`` at the end of model compilation?
     This happens if you compiled TVM-Unity from source and didn't hide LLVM symbols in cmake configurations.
-    Please follow our instructions in :ref:`Building TVM Unity from Source <tvm-unity-build-from-source>` tutorial to compile TVM-Unity which hides LLVM symbols,
-    or use our pre-builc MLC-AI pip wheels from `MLC Packages <https://mlc.ai/package/>`__.
+    Please follow our instructions in :ref:`Building TVM Unity from Source <tvm-unity-build-from-source>` tutorial to compile TVM-Unity which hides LLVM symbols, or use our pre-built MLC-LLM :doc:`pip wheels <../install/mlc_llm>`.
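The cmake fix this hunk alludes to can be sketched as follows. This is a hedged sketch, not part of the commit: `USE_LLVM` and `HIDE_PRIVATE_SYMBOLS` are assumptions based on TVM's standard `config.cmake` options, so verify them against the Building TVM Unity from Source tutorial before relying on them.

```shell
# Sketch: enable LLVM symbol hiding for a TVM Unity source build.
# Both options below are assumed from TVM's config.cmake conventions.
mkdir -p build
echo 'set(USE_LLVM ON)' >> build/config.cmake
echo 'set(HIDE_PRIVATE_SYMBOLS ON)' >> build/config.cmake
# Show what was configured
grep 'SYMBOLS' build/config.cmake
```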

docs/deploy/python.rst (1 addition, 2 deletions)

@@ -11,8 +11,7 @@ We also provide a web demo based on `gradio <https://gradio.app/>`_ as an exampl
 Python API
 ----------
 
-The Python API is a part of the MLC-Chat package, which we have prepared pre-built pip wheels and you can install it by
-following the instructions in `<https://mlc.ai/package/>`_.
+The Python API is a part of the MLC-Chat package, which we have prepared pre-built pip wheels via the :doc:`installation page <../install/mlc_llm>`.
 
 Verify Installation
 ^^^^^^^^^^^^^^^^^^^
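A quick way to sanity-check the wheel before the Verify Installation step above is to probe for the module without importing it. The `mlc_chat` module name is an assumption based on the MLC-Chat package name, not taken from this diff.

```shell
# Check whether the assumed mlc_chat module is visible to the interpreter,
# printing a message instead of failing when it is absent.
result=$(python3 -c "import importlib.util; print('mlc_chat installed' if importlib.util.find_spec('mlc_chat') else 'mlc_chat not found')")
echo "$result"
```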

docs/deploy/rest.rst (1 addition, 2 deletions)

@@ -11,8 +11,7 @@ for user to interact with MLC-Chat in their own programs.
 Install MLC-Chat Package
 ------------------------
 
-The REST API is a part of the MLC-Chat package, which we have prepared pre-built pip wheels and you can install it by
-following the instructions in `<https://mlc.ai/package/>`_.
+The REST API is a part of the MLC-Chat package, which we have prepared pre-built :doc:`pip wheels <../install/mlc_llm>`.
 
 Verify Installation
 ^^^^^^^^^^^^^^^^^^^
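Since this page documents a REST interface, a request body can be prepared and validated offline before the server is even running. The endpoint path and JSON field names below are purely illustrative assumptions, not taken from this diff or from the MLC-Chat docs.

```shell
# Write a hypothetical chat request body and check it is well-formed JSON.
cat > /tmp/mlc_rest_request.json <<'EOF'
{"messages": [{"role": "user", "content": "What is MLC LLM?"}], "stream": false}
EOF
python3 -m json.tool /tmp/mlc_rest_request.json

# With a server running locally, it could then be posted (hypothetical URL):
# curl -X POST http://127.0.0.1:8000/chat/completions \
#      -H "Content-Type: application/json" -d @/tmp/mlc_rest_request.json
```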

docs/index.rst (3 additions, 2 deletions)

@@ -11,13 +11,13 @@ Getting Started
 ---------------
 
 To begin with, try out MLC LLM support for int4-quantized Llama2 7B.
-It is recommended to have at least 4.5GB of free VRAM to run it.
+It is recommended to have at least 6GB free VRAM to run it.
 
 .. tabs::
 
    .. tab:: Python
 
-      **Install MLC Chat**. `MLC Chat <https://mlc.ai/package/>`_ is available via pip.
+      **Install MLC Chat Python**. :doc:`MLC LLM <install/mlc_llm>` is available via pip.
       It is always recommended to install it in an isolated conda virtual environment.
 
       **Download pre-quantized weights**. The comamnds below download the int4-quantized Llama2-7B from HuggingFace:

@@ -209,6 +209,7 @@ It is recommended to have at least 4.5GB of free VRAM to run it.
    :hidden:
 
    install/tvm.rst
+   install/mlc_llm.rst
    install/conda.rst
    install/gpu.rst
    install/emcc.rst

docs/install/gpu.rst (1 addition, 1 deletion)

@@ -105,7 +105,7 @@ After installation, you can run ``vulkaninfo`` in command line and see if you ca
 Vulkan SDK
 ----------
 
-Vulkan SDK is required for compiling models to Vulkan backend. To build TVM Unity compiler from source, you will need to install Vulkan SDK as a dependency, but our `pre-built wheels <https://mlc.ai/package>`__ already ships with Vulkan SDK.
+Vulkan SDK is required for compiling models to Vulkan backend. To build TVM Unity compiler from source, you will need to install Vulkan SDK as a dependency, but our :doc:`pre-built wheels <../install/mlc_llm>` already ships with Vulkan SDK.
 
 Check Vulkan SDK installation guide according to your platform:
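The `vulkaninfo` check mentioned in the hunk context can be wrapped so it degrades gracefully on machines without the SDK. Note the `--summary` flag is an assumption that holds only for recent Vulkan SDK releases; drop it on older installs.

```shell
# Probe for vulkaninfo without aborting when it is missing.
if command -v vulkaninfo >/dev/null 2>&1; then
  status="present"
  vulkaninfo --summary | head -n 20   # --summary assumed (recent SDKs only)
else
  status="missing"
  echo "vulkaninfo not found; install the Vulkan SDK first"
fi
echo "vulkan tooling: $status"
```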

docs/install/mlc_llm.rst (new file, 133 additions)

.. _install-mlc-packages:

Install MLC LLM Python Package
==============================

.. contents:: Table of Contents
   :local:
   :depth: 2

MLC LLM Python Package can be installed directly from a prebuilt developer package, or built from source.

Option 1. Prebuilt Package
--------------------------

We provide nightly built pip wheels for MLC-LLM via pip.
Select your operating system/compute platform and run the command in your terminal:

.. note::
   ❗ Whenever using Python, it is highly recommended to use **conda** to manage an isolated Python environment to avoid missing dependencies, incompatible versions, and package conflicts.

.. tabs::

   .. tab:: Linux

      .. tabs::

         .. tab:: CPU

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly mlc-ai-nightly

         .. tab:: CUDA 11.7

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly-cu117 mlc-ai-nightly-cu117

         .. tab:: CUDA 11.8

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly-cu118 mlc-ai-nightly-cu118

         .. tab:: CUDA 12.1

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly-cu121 mlc-ai-nightly-cu121

         .. tab:: CUDA 12.2

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly-cu122 mlc-ai-nightly-cu122

         .. tab:: ROCm 5.6

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly-rocm56 mlc-ai-nightly-rocm56

         .. tab:: ROCm 5.7

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly-rocm57 mlc-ai-nightly-rocm57

         .. tab:: Vulkan

            Supported in all Linux packages.

      .. note::

         If encountering issues with GLIBC not found, please install the latest glibc in conda:

         .. code-block:: bash

            conda install -c conda-forge libgcc-ng

   .. tab:: macOS

      .. tabs::

         .. tab:: CPU + Metal

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly mlc-ai-nightly

      .. note::

         Always check if conda is installed properly in macOS using the command below:

         .. code-block:: bash

            conda info | grep platform

         It should return "osx-64" for Mac with Intel chip, and "osx-arm64" for Mac with Apple chip.

   .. tab:: Windows

      .. tabs::

         .. tab:: CPU + Vulkan

            .. code-block:: bash

               conda activate your-environment
               python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly mlc-ai-nightly

      .. note::
         If encountering the error below:

         .. code-block:: bash

            FileNotFoundError: Could not find module 'path\to\site-packages\tvm\tvm.dll' (or one of its dependencies). Try using the full path with constructor syntax.

         It is likely `zstd`, a dependency to LLVM, was missing. Please `download <https://github.com/facebook/zstd/releases/tag/v1.5.5>`__ the precompiled binary, rename it to `zstd.dll` and copy to the same folder as `tvm.dll`.

Option 2. Build from Source
---------------------------

Upcoming.
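The platform tabs above all follow one naming pattern. As a small sketch (the wheel suffixes are taken from the tabs above; the variable names and the `platform` labels are mine), the matching install command can be assembled like this:

```shell
# Map a compute platform to the wheel suffix used by the nightly packages above.
platform="cuda-12.1"
case "$platform" in
  cpu)       suffix="" ;;
  cuda-11.7) suffix="-cu117" ;;
  cuda-11.8) suffix="-cu118" ;;
  cuda-12.1) suffix="-cu121" ;;
  cuda-12.2) suffix="-cu122" ;;
  rocm-5.6)  suffix="-rocm56" ;;
  rocm-5.7)  suffix="-rocm57" ;;
  *) echo "unknown platform: $platform" >&2; exit 1 ;;
esac
# Print the command rather than running it, so it can be inspected first.
echo "python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-chat-nightly${suffix} mlc-ai-nightly${suffix}"
```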

docs/install/tvm.rst (2 additions, 13 deletions)

@@ -97,14 +97,7 @@ A nightly prebuilt Python package of Apache TVM Unity is provided.
 
 .. tabs::
 
-   .. tab:: CPU
-
-      .. code-block:: bash
-
-         conda activate your-environment
-         python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-ai-nightly
-
-   .. tab:: Metal
+   .. tab:: CPU + Metal
 
       .. code-block:: bash

@@ -125,17 +118,13 @@ A nightly prebuilt Python package of Apache TVM Unity is provided.
 
 .. tabs::
 
-   .. tab:: CPU
+   .. tab:: CPU + Vulkan
 
       .. code-block:: bash
 
         conda activate your-environment
         python3 -m pip install --pre --force-reinstall -f https://mlc.ai/wheels mlc-ai-nightly
 
-   .. tab:: Vulkan
-
-      Supported in all Windows packages.
-
 .. note::
    If encountering the error below:
