
Commit ca6603d

Merge pull request #1185 from ipfs/feat/renaming-go-ipfs
refactor: rename `go-ipfs` to `kubo`. See ipfs/kubo#8959.
2 parents a5e30e5 + 593dbc5 commit ca6603d


68 files changed: +6186 −5536 lines

.github/actions/latest-ipfs-tag/action.yml

Lines changed: 0 additions & 7 deletions
This file was deleted.

.github/actions/latest-kubo-tag/action.yml

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+name: 'Find latest Kubo tag'
+outputs:
+  latest_tag:
+    description: "latest Kubo tag name"
+runs:
+  using: 'docker'
+  image: 'Dockerfile'

.github/actions/latest-ipfs-tag/entrypoint.sh renamed to .github/actions/latest-kubo-tag/entrypoint.sh

Lines changed: 4 additions & 4 deletions
@@ -2,16 +2,16 @@
 set -eu
 
 # extract tag name from latest stable release
-REPO="ipfs/go-ipfs"
-LATEST_IPFS_TAG=$(curl -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/${REPO}/releases/latest" | jq --raw-output ".tag_name")
+REPO="ipfs/kubo"
+LATEST_IPFS_TAG=$(curl -L -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/${REPO}/releases/latest" | jq --raw-output ".tag_name")
 
 # extract IPFS release
 cd /tmp
 git clone "https://github.com/$REPO.git"
-cd go-ipfs
+cd kubo
 
 # confirm tag is valid
 git describe --tags "${LATEST_IPFS_TAG}"
 
-echo "The latest IPFS tag is ${LATEST_IPFS_TAG}"
+echo "The latest Kubo tag is ${LATEST_IPFS_TAG}"
 echo "::set-output name=latest_tag::${LATEST_IPFS_TAG}"
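To sanity-check the renamed action's release lookup without running its Docker container, the same call can be run locally. This is a minimal sketch, assuming `curl` and `jq` are installed; the repository slug comes from the diff above:

```sh
# Query the GitHub releases API for ipfs/kubo and print the latest stable tag name.
curl -sL -H "Accept: application/vnd.github.v3+json" \
  "https://api.github.com/repos/ipfs/kubo/releases/latest" | jq --raw-output ".tag_name"
```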

.github/actions/update-with-latest-versions/action.yml

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 name: 'Update when a new tag or a new release is available'
 inputs:
   latest_ipfs_tag:
-    description: "latest go ipfs tag"
+    description: "latest Kubo tag"
     required: true
 outputs:
   updated_branch:

.github/actions/update-with-latest-versions/entrypoint.sh

Lines changed: 12 additions & 12 deletions
@@ -4,40 +4,40 @@ set -eu
 BRANCH=bump-documentation-to-latest-versions
 LATEST_IPFS_TAG=$INPUT_LATEST_IPFS_TAG
 
-echo "The latest IPFS tag is ${LATEST_IPFS_TAG}"
+echo "The latest Kubo tag is ${LATEST_IPFS_TAG}"
 
 ROOT=`pwd`
 git checkout -b ${BRANCH}
-API_FILE=`pwd`/docs/reference/http/api.md
+API_FILE="$(pwd)/docs/reference/kubo/rpc.md"
 
 
 # Update http api docs and cli docs
 
 cd tools/http-api-docs
 
-# extract go-ipfs release tag used in http-api-docs from go.mod in this repo
-CURRENT_IPFS_TAG=`grep 'github.com/ipfs/go-ipfs ' ./go.mod | awk '{print $2}'`
-echo "The currently used go-ipfs tag in http-api-docs is ${CURRENT_IPFS_TAG}"
+# extract kubo release tag used in http-api-docs from go.mod in this repo
+CURRENT_IPFS_TAG=$(grep 'github.com/ipfs/kubo ' ./go.mod | awk '{print $2}')
+echo "The currently used Kubo tag in http-api-docs is ${CURRENT_IPFS_TAG}"
 
-# make the upgrade, if newer go-ipfs tags exist
+# make the upgrade, if newer Kubo tags exist
 if [ "$CURRENT_IPFS_TAG" = "$LATEST_IPFS_TAG" ]; then
-  echo "http-api-docs already uses the latest go-ipfs tag."
+  echo "http-api-docs already uses the latest Kubo tag."
 else
   # update http-api-docs
-  sed "s/^\s*github.com\/ipfs\/go-ipfs\s\+$CURRENT_IPFS_TAG\s*$/ github.com\/ipfs\/go-ipfs $LATEST_IPFS_TAG/" go.mod > go.mod2
+  sed "s/^\s*github.com\/ipfs\/kubo\s\+$CURRENT_IPFS_TAG\s*$/ github.com\/ipfs\/kubo $LATEST_IPFS_TAG/" go.mod > go.mod2
   mv go.mod2 go.mod
   go mod tidy
   make
   http-api-docs > "$API_FILE"
 
   # update cli docs
   cd "$ROOT" # go back to root of ipfs-docs repo
-  git clone https://github.com/ipfs/go-ipfs.git
-  cd go-ipfs
+  git clone https://github.com/ipfs/kubo.git
+  cd kubo
   git fetch --all --tags
   git checkout "tags/$LATEST_IPFS_TAG"
   go install ./cmd/ipfs
-  cd "$ROOT/docs/reference"
+  cd "$ROOT/docs/reference/kubo"
   ./generate-cli-docs.sh
 fi
 
@@ -64,7 +64,7 @@ update_version() {
 cd "${ROOT}"
 update_version ipfs/ipfs-update current-ipfs-updater-version
 update_version ipfs-cluster/ipfs-cluster current-ipfs-cluster-version
-update_version ipfs/go-ipfs current-ipfs-version
+update_version ipfs/kubo current-ipfs-version
 
 
 # Push on change
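The `grep`/`awk` extraction in this script can be exercised in isolation. Here is a small sketch against a sample `go.mod`; the module contents and version below are illustrative, not taken from the repo:

```sh
# Create a throwaway go.mod with a hypothetical pinned kubo version.
cat > /tmp/go.mod.sample <<'EOF'
module github.com/ipfs/ipfs-docs/tools/http-api-docs

require (
	github.com/ipfs/kubo v0.14.0
)
EOF

# Same pattern as the script: match the dependency line, print the version column.
grep 'github.com/ipfs/kubo ' /tmp/go.mod.sample | awk '{print $2}'
```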

.github/workflows/update-on-new-ipfs-tag.yml

Lines changed: 4 additions & 4 deletions
@@ -11,9 +11,9 @@ jobs:
     steps:
       - name: Checkout ipfs-docs
         uses: actions/checkout@v2
-      - name: Find latest go-ipfs tag
+      - name: Find latest kubo tag
         id: latest_ipfs
-        uses: ./.github/actions/latest-ipfs-tag
+        uses: ./.github/actions/latest-kubo-tag
       - name: Update docs
         id: update
         uses: ./.github/actions/update-with-latest-versions
@@ -26,7 +26,7 @@ jobs:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           source_branch: ${{ steps.update.outputs.updated_branch }}
           destination_branch: "main"
-          pr_title: "Update documentation ${{ steps.latest_ipfs.outputs.latest_tag }}"
-          pr_body: "Release Notes: https://github.com/ipfs/go-ipfs/releases/${{ steps.latest_ipfs.outputs.latest_tag }}"
+          pr_title: "Update release version numbers"
+          pr_body: "This PR was opened from update-on-new-ipfs-tag.yml workflow."
           pr_label: "needs/triage,P0"
 

CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -64,9 +64,9 @@ Write everything in using the [GitHub Flavored Markdown](https://github.github.c
 
 
 ### Project specific titles
 
-When referring to projects by name, use proper noun capitalization: Go-IPFS and JS-IPFS.
+When referring to projects by name, use proper noun capitalization: Kubo (GO-IPFS) and JS-IPFS.
 
-Cases inside code blocks refer to commands and are not capitalized: `go-ipfs` or `js-ipfs`.
+Cases inside code blocks refer to commands and are not capitalized: `kubo` (`go-ipfs`) or `js-ipfs`.
 
 ### Style and tone
 

docs/.vuepress/config.js

Lines changed: 4 additions & 3 deletions
@@ -263,10 +263,11 @@ module.exports = {
         title: 'API & CLI',
         path: '/reference/',
         children: [
-          '/reference/go/api',
+          '/reference/http/gateway',
           '/reference/js/api',
-          '/reference/http/api',
-          '/reference/cli'
+          '/reference/go/api',
+          '/reference/kubo/cli',
+          '/reference/kubo/rpc'
         ]
       },
       {

docs/.vuepress/redirects

Lines changed: 4 additions & 1 deletion
@@ -51,7 +51,10 @@
 /recent-releases/go-ipfs-0-7/install/ /install/recent-releases
 /recent-releases/go-ipfs-0-7/update-procedure/ /install/recent-releases
 /reference/api/ /reference
-/reference/api/cli/ /reference/cli
+/reference/api/cli/ /reference/kubo/cli
+/reference/cli/ /reference/kubo/cli
+/reference/kubo/ /reference
+/reference/http/ /reference/http/api
 /reference/api/http/ /reference/http/api
 /reference/go/overview/ /reference/go/api
 /reference/js/overview/ /reference/js/api

docs/.vuepress/theme/components/Page.vue

Lines changed: 11 additions & 0 deletions
@@ -54,10 +54,21 @@ export default {
       return root.scrollHeight < 15000
         ? root.classList.add('smooth-scroll')
         : root.classList.remove('smooth-scroll')
+    },
+    advancedRedirect: async function () {
+      // Advanced redirect that is aware of URL #hash
+      const url = window.location.href
+      // https://github.com/ipfs/ipfs-docs/pull/1185
+      if (url.includes('/reference/http/api')) {
+        if (window.location.hash.startsWith('#api-v0')) {
+          window.location.replace(url.replace('/reference/http/api','/reference/kubo/rpc'))
+        }
+      }
     }
   },
   mounted: function () {
     this.smoothScroll()
+    this.advancedRedirect()
   },
   updated: function () {
     this.smoothScroll()

docs/basics/command-line.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ This will output something like:
 
 ```plaintext
 Initializing daemon...
-go-ipfs version: 0.12.0
+Kubo version: 0.12.0
 Repo version: 12
 System version: arm64/darwin
 [...]
docs/community/contribute/grammar-formatting-and-style.md

Lines changed: 2 additions & 2 deletions
@@ -36,9 +36,9 @@ If you have to use an acronym, spell the full phrase first and include the acron
 
 
 ### Project specific titles
 
-When referring to projects by name, use proper noun capitalization: Go-IPFS and JS-IPFS.
+When referring to projects by name, use proper noun capitalization: Kubo and JS-IPFS.
 
-Cases inside code blocks refer to commands and are not capitalized: `go-ipfs` or `js-ipfs`.
+Cases inside code blocks refer to commands and are not capitalized: `kubo` or `js-ipfs`.
 
 ### _Using_ IPFS, not _on_ IPFS
 

docs/community/contribute/ways-to-contribute.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ IPFS and its sister-projects are big, with lots of code written in multiple lang
 
 The biggest and most active repositories we have today are:
 
-- [ipfs/go-ipfs](https://github.com/ipfs/go-ipfs)
+- [ipfs/kubo](https://github.com/ipfs/kubo)
 - [ipfs/js-ipfs](https://github.com/ipfs/js-ipfs)
 - [libp2p/go-libp2p](https://github.com/libp2p/go-libp2p)
 - [libp2p/js-libp2p](https://github.com/libp2p/js-libp2p)

docs/concepts/case-study-arbol.md

Lines changed: 3 additions & 3 deletions
@@ -80,21 +80,21 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b
 
 4. **Compression:** This step is the final one before data is imported to IPFS. Arbol compresses each file to save on disk space and reduce sync time.
 
-5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/go-ipfs/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.
+5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/kubo/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.
 
 6. **Verification:** To ensure no errors were introduced to files during the parsing stage, queries are made to the source data files and compared against the results of an identical query made to the parsed, hashed data.
 
 7. **Publishing:** Once a hash has been verified, it is posted to Arbol's master heads reference file, and is at this point accessible via Arbol's gateway and available for use in contracts.
 
-8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.
+8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.
 
 9. **Garbage collection:** Some older Arbol datasets require [garbage collection](glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.
 
 ### The tooling
 
 ![Arbol high-level architecture](./images/case-studies/img-arbol-arch.svg)
 
-In addition to out-of-the-box [`go-ipfs`](https://github.com/ipfs/go-ipfs), Arbol relies heavily on custom written libraries and a number of weather-specialized Python libraries such as [netCDF4](https://pypi.org/project/netCDF4/) (an interface to netCDF, a self-describing format for array-oriented data) and [rasterio](https://pypi.org/project/rasterio) (for geospatial raster data). Additionally, Docker and Digital Ocean are important tools in Arbol's box for continuous integration and deployment.
+In addition to out-of-the-box [`kubo`](https://github.com/ipfs/kubo), Arbol relies heavily on custom written libraries and a number of weather-specialized Python libraries such as [netCDF4](https://pypi.org/project/netCDF4/) (an interface to netCDF, a self-describing format for array-oriented data) and [rasterio](https://pypi.org/project/rasterio) (for geospatial raster data). Additionally, Docker and Digital Ocean are important tools in Arbol's box for continuous integration and deployment.
 
 As described above, Arbol datasets are ingested and augmented via either push or pull. For pulling data, Arbol uses a command server to query dataset release pages for new content. When new data is found, the command server spins up a Digital Ocean droplet (a Linux-based virtual machine) and deploys a "parse-interpret-compress-hash-verify" Docker container to it. This is done using a custom-built library that Arbol describes as "homebrew Lambda." Because Amazon's Lambda serverless compute has disk storage, CPU, and RAM limitations that make it unsuitable for the scale and complexity of Arbol's pipeline, the team has created their own tool.
 
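The hashing and pinning steps described in items 5 and 8 of this diff map onto ordinary Kubo CLI calls. A minimal sketch, assuming a local Kubo node; the dataset path and root CID below are hypothetical placeholders:

```sh
# The no-copy behaviour relies on the experimental filestore, which must be enabled first.
ipfs config --json Experimental.FilestoreEnabled true

# Recursive add that references the files in place instead of copying them into the datastore.
ipfs add -r --nocopy /data/arbol/prism-precipitation

# On a storage node, recursively pin a root hash found in the heads file (placeholder CID).
ipfs pin add -r <root-cid-from-heads-file>
```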

docs/concepts/case-study-audius.md

Lines changed: 3 additions & 3 deletions
@@ -94,15 +94,15 @@ IPFS has provided Audius the full benefits of decentralized storage with no hass
 
 ## How Audius uses IPFS
 
-All files and metadata on Audius are _shared_ using IPFS by creator node services, _registered_ on Audius smart contracts, _indexed_ by discovery services, and _served_ through the client to end users. Audius runs nodes internally to test new changes, and there are a dozen public hosts running nodes for specific services and geographies. However, content creators and listeners don’t need to know anything about the back end; they use the Audius client and client libraries to upload and stream audio. Each IPFS node within the Audius network is currently a [`go-ipfs`](https://github.com/ipfs/go-ipfs) container co-located with service logic. Audius implements the services interface with `go-ipfs` using [`py-ipfs-api`](https://github.com/ipfs-shipyard/py-ipfs-http-client) or [`ipfs-http-client`](https://github.com/ipfs/js-ipfs/tree/master/packages/ipfs-http-client) (JavaScript) to perform read and write operations.
+All files and metadata on Audius are _shared_ using IPFS by creator node services, _registered_ on Audius smart contracts, _indexed_ by discovery services, and _served_ through the client to end users. Audius runs nodes internally to test new changes, and there are a dozen public hosts running nodes for specific services and geographies. However, content creators and listeners don’t need to know anything about the back end; they use the Audius client and client libraries to upload and stream audio. Each IPFS node within the Audius network is currently a [`kubo`](https://github.com/ipfs/kubo) container co-located with service logic. Audius implements the services interface with `kubo` using [`py-ipfs-api`](https://github.com/ipfs-shipyard/py-ipfs-http-client) or [`ipfs-http-client`](https://github.com/ipfs/js-ipfs/tree/master/packages/ipfs-http-client) (JavaScript) to perform read and write operations.
 
 ### The tooling
 
 Audius uses the following IPFS implementations with no modification:
 
 - **IPFS core**
-  - [`go-ipfs`](https://github.com/ipfs/go-ipfs)
-  - _All individual nodes are `go-ipfs` containers_
+  - [`kubo`](https://github.com/ipfs/kubo)
+  - _All individual nodes are `kubo` containers_
   - [`py-ipfs-api`](https://github.com/ipfs-shipyard/py-ipfs-http-client)
   - _Discovery provider is a Python application_
   - _Python application uses a Flask server + Celery worker queue + PostgreSQL database_
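Since the read and write operations mentioned in this section go through the co-located node's RPC API, here is a rough sketch of equivalent calls with plain `curl`, assuming the default Kubo RPC address; the file name is a hypothetical example:

```sh
# Write: add a file through the local node and capture the returned CID.
CID=$(curl -s -X POST -F file=@track-metadata.json \
  "http://127.0.0.1:5001/api/v0/add" | jq -r .Hash)

# Read: fetch the same content back by CID.
curl -s -X POST "http://127.0.0.1:5001/api/v0/cat?arg=${CID}"
```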
