
Commit 2c492e5

Grammatical and Typographical improvements (mlc-ai#1139)
* Update faq.rst
* Update guideline.rst
* Update compile_models.rst
* Update distribute_compiled_models.rst
* Update get-vicuna-weight.rst
* Update python.rst
* Update android.rst
* Update cli.rst
* Update ios.rst
* Update javascript.rst
* Update python.rst
* Update rest.rst
1 parent 24f795e commit 2c492e5

12 files changed, +61 -61 lines changed

docs/community/faq.rst

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Frequently Asked Questions

 This is a list of Frequently Asked Questions (FAQ) about the MLC-LLM. Feel free to suggest new entries!

-... How can I customize the temperature, repetition penalty of models?
+... How can I customize the temperature, and repetition penalty of models?
    Please check our :doc:`/get_started/mlc_chat_config` tutorial.

 ... What's the quantization algorithm MLC-LLM using?
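
The settings this FAQ entry refers to live in a model's ``mlc-chat-config.json``. As a rough sketch (the path below is hypothetical, and the field names follow the chat config described in :doc:`/get_started/mlc_chat_config`), they can be adjusted from Python like so:

    import json
    from pathlib import Path

    # Hypothetical path to a compiled model's chat config; adjust for your model.
    cfg_path = Path("dist/RedPajama-INCITE-Chat-3B-v1-q4f16_1/params/mlc-chat-config.json")

    cfg = json.loads(cfg_path.read_text())
    cfg["temperature"] = 0.7          # lower values give more deterministic sampling
    cfg["repetition_penalty"] = 1.05  # values > 1.0 penalize repeated tokens
    cfg_path.write_text(json.dumps(cfg, indent=2))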

docs/community/guideline.rst

Lines changed: 10 additions & 10 deletions
@@ -42,11 +42,11 @@ Ready to contribute to MLC-LLM? Awesome! We are excited to see you are ready to
 The standard way to make changes to MLC-LLM code base is through creating a `pull-request <https://github.com/mlc-ai/mlc-llm/pulls>`__,
 and we will review your code and merge it to the code base when it is ready.

-The first step to become a developer is to `fork <https://github.com/mlc-ai/mlc-llm/fork>`__ the repository to your own
+The first step to becoming a developer is to `fork <https://github.com/mlc-ai/mlc-llm/fork>`__ the repository to your own
 github account, you will notice a repository under ``https://github.com/username/mlc-llm`` where ``username`` is your github user name.

 You can clone your fork to your local machine and commit changes, or edit the contents of your fork (in the case you are just fixing typos)
-on github directly. Once your update is complete, you can click the ``contribute`` button and open a pull request to the main repository.
+on GitHub directly. Once your update is complete, you can click the ``contribute`` button and open a pull request to the main repository.

 .. _contribute-new-models:

@@ -86,14 +86,14 @@ Fo your convenience, you can use `clang-format <https://clang.llvm.org/docs/Clan
 General Development Process
 ---------------------------

-Everyone in the community is welcomed to send patches, documents, and propose new directions to the project.
-The key guideline here is to enable everyone in the community to get involved and participate the decision and development.
+Everyone in the community is welcome to send patches, documents, and propose new directions to the project.
+The key guideline here is to enable everyone in the community to get involved and participate in the decision and development.
 We encourage public discussion in different channels, so that everyone in the community can participate
 and get informed in developments.

 Code reviews are one of the key ways to ensure the quality of the code. High-quality code reviews prevent technical debt
 for long-term and are crucial to the success of the project. A pull request needs to be reviewed before it gets merged.
-A committer who has the expertise of the corresponding area would moderate the pull request and the merge the code when
+A committer who has the expertise of the corresponding area would moderate the pull request and merge the code when
 it is ready. The corresponding committer could request multiple reviewers who are familiar with the area of the code.
 We encourage contributors to request code reviews themselves and help review each other's code -- remember everyone
 is volunteering their time to the community, high-quality code review itself costs as much as the actual code

@@ -108,18 +108,18 @@ moderate technical discussions in a diplomatic way, and provide suggestions with
 Committers
 ^^^^^^^^^^

-Committers are individuals who are granted the write access to the project. A committer is usually responsible for
+Committers are individuals who are granted with write access to the project. A committer is usually responsible for
 a certain area or several areas of the code where they oversee the code review process.
 The area of contribution can take all forms, including code contributions and code reviews, documents, education, and outreach.
-The review of pull requests will be assigned to the committers who recently contribute to the area this PR belong to.
-Committers are essential for a high quality and healthy project. The community actively look for new committers
+The review of pull requests will be assigned to the committers who recently contribute to the area this PR belongs to.
+Committers are essential for a high quality and healthy project. The community actively looks for new committers
 from contributors. Each existing committer can nominate new committers to MLC projects.

 .. _roles-contributors:

 Contributors
 ^^^^^^^^^^^^
 We also welcome contributors if you are not ready to be a committer yet. Everyone who contributes to
-the project (in the form of code, bugfix, documentation, tutorials, etc) is a contributors.
+the project (in the form of code, bugfix, documentation, tutorials, etc) is a contributor.
 We maintain a `page <https://github.com/mlc-ai/mlc-llm/blob/main/CONTRIBUTORS.md>`__ to acknowledge contributors,
-please let us know if you contribute to the project and your name is not included in the list.
+please let us know if you contribute to the project and if your name is not included in the list.

docs/compilation/compile_models.rst

Lines changed: 10 additions & 10 deletions
@@ -4,14 +4,14 @@ Compile Models via MLC
 ======================

 This page describes how to compile a model with MLC LLM. Model compilation takes model inputs, produces quantized model weights,
-and optimized model lib for a given platform. It enables users to bring their own new model weights, try different quantization modes,
+and optimizes model lib for a given platform. It enables users to bring their own new model weights, try different quantization modes,
 and customize the overall model optimization flow.

 .. note::
    Before you proceed, please make sure that you have :ref:`install-tvm-unity` correctly installed on your machine.
    TVM-Unity is the necessary foundation for us to compile models with MLC LLM.
    If you want to build webgpu, please also complete :ref:`install-web-build`.
-   Please also follow the instruction in :ref:`deploy-cli` to obtain the CLI app that can be used to chat with the compiled model.
+   Please also follow the instructions in :ref:`deploy-cli` to obtain the CLI app that can be used to chat with the compiled model.
    Finally, we strongly recommend you read :ref:`project-overview` first to get familiarized with the high-level terminologies.


@@ -25,7 +25,7 @@ Install MLC-LLM Package
 Work with Source Code
 ^^^^^^^^^^^^^^^^^^^^^

-The easiest way is to use MLC-LLM is to clone the repository, and compile models under the root directory of the repository.
+The easiest way to use MLC-LLM is to clone the repository, and compile models under the root directory of the repository.

 .. code:: bash

@@ -106,7 +106,7 @@ your personal computer.
       xcrun: error: unable to find utility "metallib", not a developer tool or in PATH

    , please check and make sure you have Command Line Tools for Xcode installed correctly.
-   You can use ``xcrun metal`` to validate: when it prints ``metal: error: no input files``, it means the Command Line Tools for Xcode is installed and can be found, and you can proceed the model compiling.
+   You can use ``xcrun metal`` to validate: when it prints ``metal: error: no input files``, it means the Command Line Tools for Xcode is installed and can be found, and you can proceed with the model compiling.

 .. group-tab:: Android

@@ -172,7 +172,7 @@ We can check the output with the commands below:
       tokenizer_config.json

 We now chat with the model using the command line interface (CLI) app.
-Follow the build from source instruction
+Follow the build from the source instruction

 .. code:: shell

@@ -271,7 +271,7 @@ We can check the output with the commands below:
       tokenizer_config.json

 The model lib ``dist/RedPajama-INCITE-Chat-3B-v1-q4f16_1/RedPajama-INCITE-Chat-3B-v1-q4f16_1-webgpu.wasm``
-can be uploaded to internet. You can pass a ``model_lib_map`` field to WebLLM app config to use this library.
+can be uploaded to the internet. You can pass a ``model_lib_map`` field to WebLLM app config to use this library.


 Each compilation target produces a specific model library for the given platform. The model weight is shared across

@@ -311,7 +311,7 @@ In other cases you need to specify the model via ``--model``.
       - ``dist/models/MODEL_NAME_OR_PATH`` (e.g., ``--model Llama-2-7b-chat-hf``),
       - ``MODEL_NAME_OR_PATH`` (e.g., ``--model /my-model/Llama-2-7b-chat-hf``).

-      When running the compile command using ``--model``, please make sure you have placed the model to compile under ``dist/models/`` or other location on the disk.
+      When running the compile command using ``--model``, please make sure you have placed the model to compile under ``dist/models/`` or another location on the disk.

 --hf-path HUGGINGFACE_NAME  The name of the model's Hugging Face repository.
       We will download the model to ``dist/models/HUGGINGFACE_NAME`` and load the model from this directory.

@@ -336,11 +336,11 @@ The following arguments are optional:
       we will use the maximum sequence length from the ``config.json`` in the model directory.
 --reuse-lib LIB_NAME  Specifies the previously generated library to reuse.
       This is useful when building the same model architecture with different weights.
-      You can refer to the :ref:`model distribution <distribute-model-step3-specify-model-lib>` page for detail of this argument.
+      You can refer to the :ref:`model distribution <distribute-model-step3-specify-model-lib>` page for details of this argument.
 --use-cache  When ``--use-cache=0`` is specified,
       the model compilation will not use cached file from previous builds,
       and will compile the model from the very start.
-      Using cache can help reduce the time needed to compile.
+      Using a cache can help reduce the time needed to compile.
 --debug-dump  Specifies whether to dump debugging files during compilation.
 --use-safetensors  Specifies whether to use ``.safetensors`` instead of the default ``.bin`` when loading in model weights.

@@ -354,7 +354,7 @@ This section lists compile commands for more models that you can try out.
 .. tab:: Model: Llama-2-7B

    Please `request for access <https://huggingface.co/meta-llama>`_ to the Llama-2 weights from Meta first.
-   After granted the access, please create directory ``dist/models`` and download the model to the directory.
+   After granted access, please create directory ``dist/models`` and download the model to the directory.
    For example, you can run the following code:

    .. code:: shell
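
For a concrete sense of the compile command these hunks describe, here is a minimal sketch driven from Python. It assumes the ``mlc_llm.build`` module entry point this doc set documents, an illustrative model already placed under ``dist/models/``, and example quantization/target values; adjust all of these for your setup:

    import subprocess

    # Sketch of one compile invocation, using flags discussed above.
    # --use-cache=0 forces a clean rebuild, as documented in the options list.
    subprocess.run(
        [
            "python3", "-m", "mlc_llm.build",
            "--model", "RedPajama-INCITE-Chat-3B-v1",
            "--quantization", "q4f16_1",
            "--target", "webgpu",
            "--use-cache=0",
        ],
        check=True,
    )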

docs/compilation/distribute_compiled_models.rst

Lines changed: 3 additions & 3 deletions
@@ -67,7 +67,7 @@ You can **optionally** customize the chat config file
 ``dist/RedPajama-INCITE-Instruct-3B-v1-q4f16_1/params/mlc-chat-config.json`` (checkout :ref:`configure-mlc-chat-json` for more detailed instructions).
 You can also simply use the default configuration and skip this step.

-For demonstration purpose, we update ``mean_gen_len`` to 32 and ``max_gen_len`` to 64.
+For demonstration purposes, we update ``mean_gen_len`` to 32 and ``max_gen_len`` to 64.
 We also update ``conv_template`` to ``"LM"`` because the model is instruction-tuned.


@@ -160,7 +160,7 @@ Download the Distributed Models and Run in iOS App
 --------------------------------------------------

 For iOS app, model libraries are statically packed into the app at the time of app building.
-Therefore, the iOS app supports running any models whose model libraries are integrated into the app.
+Therefore, the iOS app supports running any model whose model libraries are integrated into the app.
 You can check the :ref:`list of supported model libraries <using-prebuilt-models-ios>`.

 To download and run the compiled RedPajama-3B instruct model on iPhone, we need to reuse the integrated ``RedPajama-INCITE-Chat-3B-v1-q4f16_1`` model library.

@@ -198,7 +198,7 @@ Now we can download the model weights in iOS app and run the model by following

 .. tab:: Step 4

-   When the download is finished, click into the model and enjoy.
+   When the download is finished, click on the model and enjoy.

    .. image:: https://raw.githubusercontent.com/mlc-ai/web-data/main/images/mlc-llm/tutorials/iPhone-distribute-4.jpeg
       :align: center

docs/compilation/get-vicuna-weight.rst

Lines changed: 4 additions & 4 deletions
@@ -5,7 +5,7 @@ Getting Vicuna Weights
    :local:
    :depth: 2

-`Vicuna <https://lmsys.org/blog/2023-03-30-vicuna/>`_ is a open-source chatbot trained by fine-tuning `LLaMA <https://ai.facebook.com/blog/large-language-model-llama-meta-ai/>`_ on `ShartGPT <https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered>`_ data.
+`Vicuna <https://lmsys.org/blog/2023-03-30-vicuna/>`_ is an open-source chatbot trained by fine-tuning `LLaMA <https://ai.facebook.com/blog/large-language-model-llama-meta-ai/>`_ on `ShartGPT <https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered>`_ data.

 Please note that the official Vicuna weights are delta weights applied to the LLaMA weights in order to comply with the LLaMA license. Users are responsible for applying these delta weights themselves.

@@ -14,7 +14,7 @@ In this tutorial, we will show how to apply the delta weights to LLaMA weights t
 Install FastChat
 ----------------

-FastChat offers convenient utility functions for applying delta to LLaMA weights. You can easily install it using pip.
+FastChat offers convenient utility functions for applying the delta to LLaMA weights. You can easily install it using pip.

 .. code-block:: bash

@@ -38,14 +38,14 @@ Then download the weights (both the LLaMA weight and Vicuna delta weight):
    git clone https://huggingface.co/lmsys/vicuna-7b-delta-v1.1


-There is a name mis-alignment issue in the LLaMA weights and Vicuna delta weights.
+There is a name misalignment issue in the LLaMA weights and Vicuna delta weights.
 Please follow these steps to modify the content of the "config.json" file:

 .. code-block:: bash

    sed -i 's/LLaMAForCausalLM/LlamaForCausalLM/g' llama-7b-hf/config.json

-Then use ``fschat`` to apply delta to LLaMA weights
+Then use ``fschat`` to apply the delta to LLaMA weights

 .. code-block:: bash
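
The hunk's context cuts off just before the actual command in the file, which stays elided here. For orientation only, a plausible invocation of FastChat's ``apply_delta`` tool, expressed as a Python subprocess call to keep these sketches in one language; the directory names are assumed from the ``git clone`` steps shown above:

    import subprocess

    # Plausible delta-application step; adjust paths to where the weights live.
    subprocess.run(
        [
            "python3", "-m", "fastchat.model.apply_delta",
            "--base-model-path", "llama-7b-hf",
            "--target-model-path", "vicuna-7b-v1.1",
            "--delta-path", "vicuna-7b-delta-v1.1",
        ],
        check=True,
    )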

docs/compilation/python.rst

Lines changed: 4 additions & 4 deletions
@@ -5,8 +5,8 @@ Python API for Model Compilation
    :local:
    :depth: 2

-We expose Python API for compiling/building model in the package :py:mod:`mlc_llm`, so
-that users may build model in any directory in their program (i.e. not just
+We expose Python API for compiling/building models in the package :py:mod:`mlc_llm`, so
+that users may build a model in any directory in their program (i.e. not just
 within the mlc-llm repo).

 Install MLC-LLM as a Package

@@ -44,7 +44,7 @@ After installing the package, you can build the model using :meth:`mlc_llm.build
 which takes in an instance of :class:`BuildArgs` (a dataclass that represents
 the arguments for building a model).

-For detailed instruction with code, please refer to `the python notebook
+For detailed instructions with code, please refer to `the Python notebook
 <https://github.com/mlc-ai/notebooks/blob/main/mlc-llm/tutorial_compile_llama2_with_mlc_llm.ipynb>`_
 (executable in Colab), where we walk you through compiling Llama-2 with :py:mod:`mlc_llm`
 in Python.

@@ -56,7 +56,7 @@ API Reference

 In order to use the python API :meth:`mlc_llm.build_model`, users need to create
 an instance of the dataclass :class:`BuildArgs`. The corresponding arguments in
-command line shown in :ref:`compile-command-specification` are automatically
+the command line shown in :ref:`compile-command-specification` are automatically
 converted from the definition of :class:`BuildArgs` and are equivalent.

 Then with an instantiated :class:`BuildArgs`, users can call the build API
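
To make that equivalence concrete, a minimal sketch of the Python build API follows. It assumes ``BuildArgs`` and ``build_model`` are importable from ``mlc_llm`` as these docs state, and that the dataclass field names mirror the CLI flags (``model``, ``quantization``, ``target``); check the actual :class:`BuildArgs` definition before relying on them:

    from mlc_llm import BuildArgs, build_model

    # Field names assumed to mirror the CLI flags documented in
    # compile_models.rst; values here are illustrative.
    args = BuildArgs(
        model="Llama-2-7b-chat-hf",
        quantization="q4f16_1",
        target="cuda",
    )
    build_model(args)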
