diff --git a/locale/en/blog/module/multi-server-continuous-deployment-with-fleet.md b/locale/en/blog/module/multi-server-continuous-deployment-with-fleet.md index 8daf525ef7213..9512129d7c311 100644 --- a/locale/en/blog/module/multi-server-continuous-deployment-with-fleet.md +++ b/locale/en/blog/module/multi-server-continuous-deployment-with-fleet.md @@ -8,51 +8,64 @@ slug: multi-server-continuous-deployment-with-fleet layout: blog-post.hbs --- -

substackThis is a guest post by James "SubStack" Halliday, originally posted on his blog, and reposted here with permission.

+_This is a guest post by James "SubStack" Halliday, originally posted [on his blog](http://substack.net/posts/16a9d8/multi-server-continuous-deployment-with-fleet), and reposted here with permission._ -

Writing applications as a sequence of tiny services that all talk to each other over the network has many upsides, but it can be annoyingly tedious to get all the subsystems up and running.

+Writing applications as a sequence of tiny services that all talk to each other over the network has many upsides, but it can be annoyingly tedious to get all the subsystems up and running. -

Running a seaport can help with getting all the services to talk to each other, but running the processes is another matter, especially when you have new code to push into production.

+Running a [seaport](http://substack.net/posts/7a1c42) can help with getting all the services to talk to each other, but running the processes is another matter, especially when you have new code to push into production. -

fleet aims to make it really easy for anyone on your team to push new code from git to an armada of servers and manage all the processes in your stack.

+[fleet](http://github.com/substack/fleet) aims to make it really easy for anyone on your team to push new code from git to an armada of servers and manage all the processes in your stack. -

To start using fleet, just install the fleet command with npm:

+To start using fleet, just install the fleet command with [npm](https://npmjs.com): -
npm install -g fleet 
+``` +npm install -g fleet +``` -

Then on one of your servers, start a fleet hub. From a fresh directory, give it a passphrase and a port to listen on:

+Then on one of your servers, start a fleet hub. From a fresh directory, give it a passphrase and a port to listen on: -
fleet hub --port=7000 --secret=beepboop 
+``` +fleet hub --port=7000 --secret=beepboop +``` -

Now fleet is listening on :7000 for commands and has started a git server on :7001 over http. There's no ssh keys or post commit hooks to configure, just run that command and you're ready to go!

+Now fleet is listening on :7000 for commands and has started a git server on :7001 over http. There are no ssh keys or post-commit hooks to configure; just run that command and you're ready to go! -

Next set up some worker drones to run your processes. You can have as many workers as you like on a single server but each worker should be run from a separate directory. Just do:

+Next set up some worker drones to run your processes. You can have as many workers as you like on a single server but each worker should be run from a separate directory. Just do: -
fleet drone --hub=x.x.x.x:7000 --secret=beepboop 
+``` +fleet drone --hub=x.x.x.x:7000 --secret=beepboop +``` -

where x.x.x.x is the address where the fleet hub is running. Spin up a few of these drones.

+where `x.x.x.x` is the address where the fleet hub is running. Spin up a few of these drones. -

Now navigate to the directory of the app you want to deploy. First set a remote so you don't need to type --hub and --secret all the time.

+Now navigate to the directory of the app you want to deploy. First set a remote so you don't need to type `--hub` and `--secret` all the time. -
fleet remote add default --hub=x.x.x.x:7000 --secret=beepboop 
+``` +fleet remote add default --hub=x.x.x.x:7000 --secret=beepboop +``` -

Fleet just created a fleet.json file for you to save your settings.

+Fleet just created a `fleet.json` file for you to save your settings. -

From the same app directory, to deploy your code just do:

+From the same app directory, to deploy your code just do: -
fleet deploy 
+``` +fleet deploy +``` -

The deploy command does a git push to the fleet hub's git http server and then the hub instructs all the drones to pull from it. Your code gets checked out into a new directory on all the fleet drones every time you deploy.

+The deploy command does a `git push` to the fleet hub's git http server and then the hub instructs all the drones to pull from it. Your code gets checked out into a new directory on all the fleet drones every time you deploy. -

Because fleet is designed specifically for managing applications with lots of tiny services, the deploy command isn't tied to running any processes. Starting processes is up to the programmer but it's super simple. Just use the fleet spawn command:

+Because fleet is designed specifically for managing applications with lots of tiny services, the deploy command isn't tied to running any processes. Starting processes is up to the programmer but it's super simple. Just use the `fleet spawn` command: -
fleet spawn -- node server.js 8080 
+``` +fleet spawn -- node server.js 8080 +``` -

By default fleet picks a drone at random to run the process on. You can specify which drone you want to run a particular process on with the --drone switch if it matters.

+By default fleet picks a drone at random to run the process on. You can specify which drone you want to run a particular process on with the `--drone` switch if it matters. -

Start a few processes across all your worker drones and then show what is running with the fleet ps command:

+Start a few processes across all your worker drones and then show what is running with the `fleet ps` command: -
fleet ps
+```
+fleet ps
 drone#3dfe17b8
 ├─┬ pid#1e99f4
 │ ├── status:   running
@@ -61,18 +74,20 @@ drone#3dfe17b8
 └─┬ pid#d7048a
   ├── status:   running
   ├── commit:   webapp/1b8050fcaf8f1b02b9175fcb422644cb67dc8cc5
-  └── command:  node server.js 8889
+  └── command:  node server.js 8889 +``` -

Now suppose that you have new code to push out into production. By default, fleet lets you spin up new services without disturbing your existing services. If you fleet deploy again after checking in some new changes to git, the next time you fleet spawn a new process, that process will be spun up in a completely new directory based on the git commit hash. To stop a process, just use fleet stop.

+Now suppose that you have new code to push out into production. By default, fleet lets you spin up new services without disturbing your existing services. If you `fleet deploy` again after checking in some new changes to git, the next time you `fleet spawn` a new process, that process will be spun up in a completely new directory based on the git commit hash. To stop a process, just use `fleet stop`. -

This approach lets you verify that the new services work before bringing down the old services. You can even start experimenting with heterogeneous and incremental deployment by hooking into a custom http proxy!

+This approach lets you verify that the new services work before bringing down the old services. You can even start experimenting with heterogeneous and incremental deployment by hooking into a custom [http proxy](http://substack.net/posts/5bd18d)! -

Even better, if you use a service registry like seaport for managing the host/port tables, you can spin up new ad-hoc staging clusters all the time without disrupting the normal operation of your site before rolling out new code to users.

+Even better, if you use a service registry like [seaport](http://substack.net/posts/7a1c42) for managing the host/port tables, you can spin up new ad-hoc staging clusters all the time without disrupting the normal operation of your site before rolling out new code to users. -

Fleet has many more commands that you can learn about with its git-style manpage-based help system! Just do fleet help to get a list of all the commands you can run.

+Fleet has many more commands that you can learn about with its git-style manpage-based help system! Just do `fleet help` to get a list of all the commands you can run. -
fleet help
-Usage: fleet <command> [<args>]
+```
+fleet help
+Usage: fleet <command> [<args>]
 
 The commands are:
   deploy   Push code to drones.
@@ -85,8 +100,7 @@ The commands are:
   spawn    Run services on drones.
   stop     Stop processes running on drones.
 
-For help about a command, try `fleet help `.
+For help about a command, try `fleet help <command>`. +``` -

npm install -g fleet and check out the code on github!

- -fleet +`npm install -g fleet` and [check out the code on github](https://github.com/substack/fleet)! diff --git a/locale/en/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md b/locale/en/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md index 7244576e14c7c..15da2ce5a6a5d 100644 --- a/locale/en/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md +++ b/locale/en/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md @@ -8,25 +8,25 @@ slug: managing-node-js-dependencies-with-shrinkwrap layout: blog-post.hbs --- -


-Photo by Luc Viatour (flickr)

+

+ + + +
+Photo by Luc Viatour (flickr) +

-

Managing dependencies is a fundamental problem in building complex software. The terrific success of github and npm have made code reuse especially easy in the Node world, where packages don't exist in isolation but rather as nodes in a large graph. The software is constantly changing (releasing new versions), and each package has its own constraints about what other packages it requires to run (dependencies). npm keeps track of these constraints, and authors express what kind of changes are compatible using semantic versioning, allowing authors to specify that their package will work with even future versions of its dependencies as long as the semantic versions are assigned properly. +Managing dependencies is a fundamental problem in building complex software. The terrific success of github and [npm](https://npmjs.com/) have made code reuse especially easy in the Node world, where packages don't exist in isolation but rather as nodes in a large graph. The software is constantly changing (releasing new versions), and each package has its own constraints about what other packages it requires to run (dependencies). npm keeps track of these constraints, and authors express what kind of changes are compatible using [semantic versioning](https://npmjs.com/doc/semver.html), allowing authors to specify that their package will work with even future versions of its dependencies as long as the semantic versions are assigned properly. -
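To make that ordering concrete, here is a minimal sketch (plain JavaScript, not npm's real semver implementation) of how versions compare, which is what lets a range keep matching future releases:

```javascript
// Minimal sketch of semantic-version ordering (NOT npm's real semver
// implementation): versions are compared by their numeric
// [major, minor, patch] parts, so a range like "<0.1.0" admits any
// future 0.0.x release while excluding 0.1.0 itself.
function parse(version) {
  return version.split('.').map(Number);
}

function lessThan(a, b) {
  const [x, y] = [parse(a), parse(b)];
  for (let i = 0; i < 3; i++) {
    if (x[i] !== y[i]) return x[i] < y[i];
  }
  return false;
}

console.log(lessThan('0.0.2', '0.1.0')); // true: a future patch release still satisfies "<0.1.0"
console.log(lessThan('0.1.0', '0.1.0')); // false: the upper bound itself is excluded
```

Real semver also covers prerelease tags and richer range syntax; this only shows why numeric ordering lets a range stay satisfied by versions published later.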

-

This does mean that when you "npm install" a package with dependencies, there's no guarantee that you'll get the same set of code now that you would have gotten an hour ago, or that you would get if you were to run it again an hour later. You may get a bunch of bug fixes now that weren't available an hour ago. This is great during development, where you want to keep up with changes upstream. It's not necessarily what you want for deployment, though, where you want to validate whatever bits you're actually shipping. +This does mean that when you "npm install" a package with dependencies, there's no guarantee that you'll get the same set of code now that you would have gotten an hour ago, or that you would get if you were to run it again an hour later. You may get a bunch of bug fixes now that weren't available an hour ago. This is great during development, where you want to keep up with changes upstream. It's not necessarily what you want for deployment, though, where you want to validate whatever bits you're actually shipping. -

-

Put differently, it's understood that all software changes incur some risk, and it's critical to be able to manage this risk on your own terms. Taking that risk in development is good because by definition that's when you're incorporating and testing software changes. On the other hand, if you're shipping production software, you probably don't want to take this risk when cutting a release candidate (i.e. build time) or when you actually ship (i.e. deploy time) because you want to validate whatever you ship. +Put differently, **it's understood that all software changes incur some risk, and it's critical to be able to manage this risk on your own terms**. Taking that risk in development is good because by definition that's when you're incorporating and testing software changes. On the other hand, if you're shipping production software, you probably don't want to take this risk when cutting a release candidate (i.e. build time) or when you actually ship (i.e. deploy time) because you want to validate whatever you ship. -

-

You can address a simple case of this problem by only depending on specific versions of packages, allowing no semver flexibility at all, but this falls apart when you depend on packages that don't also adopt the same principle. Many of us at Joyent started wondering: can we generalize this approach? +You can address a simple case of this problem by only depending on specific versions of packages, allowing no semver flexibility at all, but this falls apart when you depend on packages that don't also adopt the same principle. Many of us at Joyent started wondering: can we generalize this approach? -
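For instance (hypothetical names), a package.json that pins an exact first-level version looks like this; the ranges declared inside that dependency's own package.json are still free to float:

```json
{
    "name": "myapp",
    "version": "1.0.0",
    "dependencies": {
        "somelib": "0.3.2"
    }
}
```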

-

Shrinkwrapping packages

-

That brings us to npm shrinkwrap[1]: +## Shrinkwrapping packages -

+That brings us to [npm shrinkwrap](https://npmjs.com/doc/shrinkwrap.html)[1]:
+
+```
+NAME
+      npm-shrinkwrap -- Lock down dependency versions
+
+SYNOPSIS
+      npm shrinkwrap
+
+DESCRIPTION
+      This command locks down the versions of a package's dependencies so
+      that you can control exactly which versions of each dependency will
+      be used when your package is installed.
+```
-

Let's consider package A: +Let's consider package A: -

-
{
-    "name": "A",
-    "version": "0.1.0",
-    "dependencies": {
-        "B": "<0.1.0"
+```json
+{
+    "name": "A",
+    "version": "0.1.0",
+    "dependencies": {
+        "B": "<0.1.0"
     }
-}
-

package B: +} +``` -

-
{
-    "name": "B",
-    "version": "0.0.1",
-    "dependencies": {
-        "C": "<0.1.0"
+package B:
+
+```json
+{
+    "name": "B",
+    "version": "0.0.1",
+    "dependencies": {
+        "C": "<0.1.0"
     }
-}
-

and package C: +} +``` -

-
{
-    "name": "C,
-    "version": "0.0.1"
-}
-

If these are the only versions of A, B, and C available in the registry, then a normal "npm install A" will install: +and package C: -

-
A@0.1.0
+```json
+{
+    "name": "C",
+    "version": "0.0.1"
+}
+```
+
+If these are the only versions of A, B, and C available in the registry, then a normal "npm install A" will install:
+
+```
+A@0.1.0
 └─┬ B@0.0.1
-  └── C@0.0.1
-

Then if B@0.0.2 is published, then a fresh "npm install A" will install: + └── C@0.0.1 +``` -

-
A@0.1.0
+Then if B\@0.0.2 is published, a fresh "npm install A" will install:
+
+```
+A@0.1.0
 └─┬ B@0.0.2
-  └── C@0.0.1
-

assuming the new version did not modify B's dependencies. Of course, the new version of B could include a new version of C and any number of new dependencies. As we said before, if A's author doesn't want that, she could specify a dependency on B@0.0.1. But if A's author and B's author are not the same person, there's no way for A's author to say that she does not want to pull in newly published versions of C when B hasn't changed at all. + └── C@0.0.1 +``` -

-

In this case, A's author can use +assuming the new version did not modify B's dependencies. Of course, the new version of B could include a new version of C and any number of new dependencies. As we said before, if A's author doesn't want that, she could specify a dependency on B\@0.0.1. But if A's author and B's author are not the same person, there's no way for A's author to say that she does not want to pull in newly published versions of C when B hasn't changed at all. -

-
# npm shrinkwrap
-

This generates npm-shrinkwrap.json, which will look something like this: +In this case, A's author can use -

-
{
-    "name": "A",
-    "dependencies": {
-        "B": {
-            "version": "0.0.1",
-            "dependencies": {
-                "C": {  "version": "0.1.0" }
+```
+npm shrinkwrap
+```
+
+This generates npm-shrinkwrap.json, which will look something like this:
+
+```json
+{
+    "name": "A",
+    "dependencies": {
+        "B": {
+            "version": "0.0.1",
+            "dependencies": {
+                "C": { "version": "0.0.1" }
             }
         }
     }
-}
-

The shrinkwrap command has locked down the dependencies based on what's currently installed in node_modules. When "npm install" installs a package with a npm-shrinkwrap.json file in the package root, the shrinkwrap file (rather than package.json files) completely drives the installation of that package and all of its dependencies (recursively). So now the author publishes A@0.1.0, and subsequent installs of this package will use B@0.0.1 and C@0.1.0, regardless the dependencies and versions listed in A's, B's, and C's package.json files. If the authors of B and C publish new versions, they won't be used to install A because the shrinkwrap refers to older versions. Even if you generate a new shrinkwrap, it will still reference the older versions, since "npm shrinkwrap" uses what's installed locally rather than what's available in the registry. +} +``` -

-

Using shrinkwrapped packages

-

Using a shrinkwrapped package is no different than using any other package: you can "npm install" it by hand, or add a dependency to your package.json file and "npm install" it. +The shrinkwrap command has locked down the dependencies based on what's currently installed in node\_modules. **When "npm install" installs a package with an npm-shrinkwrap.json file in the package root, the shrinkwrap file (rather than package.json files) completely drives the installation of that package and all of its dependencies (recursively).** So now the author publishes A\@0.1.0, and subsequent installs of this package will use B\@0.0.1 and C\@0.0.1, regardless of the dependencies and versions listed in A's, B's, and C's package.json files. If the authors of B and C publish new versions, they won't be used to install A because the shrinkwrap refers to older versions. Even if you generate a new shrinkwrap, it will still reference the older versions, since "npm shrinkwrap" uses what's installed locally rather than what's available in the registry. -
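A toy model of that behavior (hypothetical package names, not npm's actual resolver) makes the point: installs come straight from the recorded versions, recursively, and the semver ranges in each package.json are never consulted:

```javascript
// Toy model (not npm's actual resolver) of shrinkwrap-driven installs:
// every dependency is installed at the exact recorded version,
// recursively, ignoring whatever ranges the package.json files declare.
function installsFromShrinkwrap(node, prefix = []) {
  const out = [];
  for (const [name, info] of Object.entries(node.dependencies || {})) {
    out.push([...prefix, name].join('/') + '@' + info.version);
    out.push(...installsFromShrinkwrap(info, [...prefix, name]));
  }
  return out;
}

// A hypothetical npm-shrinkwrap.json, already parsed:
const shrinkwrap = {
  name: 'myapp',
  dependencies: {
    left: { version: '1.2.3', dependencies: { base: { version: '0.9.0' } } },
    right: { version: '2.0.1' }
  }
};

console.log(installsFromShrinkwrap(shrinkwrap));
// Every fresh install reproduces exactly these versions, regardless of
// what has been published to the registry since.
```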

-

Building shrinkwrapped packages

-

To shrinkwrap an existing package: +### Using shrinkwrapped packages -

-
    -
  1. Run "npm install" in the package root to install the current versions of all dependencies.
  2. -
  3. Validate that the package works as expected with these versions.
  4. -
  5. Run "npm shrinkwrap", add npm-shrinkwrap.json to git, and publish your package.
  6. -
-

To add or update a dependency in a shrinkwrapped package: +Using a shrinkwrapped package is no different than using any other package: you can "npm install" it by hand, or add a dependency to your package.json file and "npm install" it. -

-
    -
  1. Run "npm install" in the package root to install the current versions of all dependencies.
  2. -
  3. Add or update dependencies. "npm install" each new or updated package individually and then update package.json.
  4. -
  5. Validate that the package works as expected with the new dependencies.
  6. -
  7. Run "npm shrinkwrap", commit the new npm-shrinkwrap.json, and publish your package.
  8. -
-

You can still use npm outdated(1) to view which dependencies have newer versions available. +### Building shrinkwrapped packages -

-

For more details, check out the full docs on npm shrinkwrap, from which much of the above is taken. +To shrinkwrap an existing package: -

-

Why not just check node_modules into git?

-

One previously proposed solution is to "npm install" your dependencies during development and commit the results into source control. Then you deploy your app from a specific git SHA knowing you've got exactly the same bits that you tested in development. This does address the problem, but it has its own issues: for one, binaries are tricky because you need to "npm install" them to get their sources, but this builds the [system-dependent] binary too. You can avoid checking in the binaries and use "npm rebuild" at build time, but we've had a lot of difficulty trying to do this.[2] At best, this is second-class treatment for binary modules, which are critical for many important types of Node applications.[3] +1. Run "npm install" in the package root to install the current versions of all dependencies. +2. Validate that the package works as expected with these versions. +3. Run "npm shrinkwrap", add npm-shrinkwrap.json to git, and publish your package. -

-

Besides the issues with binary modules, this approach just felt wrong to many of us. There's a reason we don't check binaries into source control, and it's not just because they're platform-dependent. (After all, we could build and check in binaries for all supported platforms and operating systems.) It's because that approach is error-prone and redundant: error-prone because it introduces a new human failure mode where someone checks in a source change but doesn't regenerate all the binaries, and redundant because the binaries can always be built from the sources alone. An important principle of software version control is that you don't check in files derived directly from other files by a simple transformation.[4] Instead, you check in the original sources and automate the transformations via the build process. +To add or update a dependency in a shrinkwrapped package: -

-

Dependencies are just like binaries in this regard: they're files derived from a simple transformation of something else that is (or could easily be) already available: the name and version of the dependency. Checking them in has all the same problems as checking in binaries: people could update package.json without updating the checked-in module (or vice versa). Besides that, adding new dependencies has to be done by hand, introducing more opportunities for error (checking in the wrong files, not checking in certain files, inadvertently changing files, and so on). Our feeling was: why check in this whole dependency tree (and create a mess for binary add-ons) when we could just check in the package name and version and have the build process do the rest? +1. Run "npm install" in the package root to install the current versions of all dependencies. +2. Add or update dependencies. "npm install" each new or updated package individually and then update package.json. +3. Validate that the package works as expected with the new dependencies. +4. Run "npm shrinkwrap", commit the new npm-shrinkwrap.json, and publish your package. -

-

Finally, the approach of checking in node_modules doesn't really scale for us. We've got at least a dozen repos that will use restify, and it doesn't make sense to check that in everywhere when we could instead just specify which version each one is using. There's another principle at work here, which is separation of concerns: each repo specifies what it needs, while the build process figures out where to get it. +You can still use [npm outdated(1)](https://npmjs.com/doc/outdated.html) to view which dependencies have newer versions available. -

-

What if an author republishes an existing version of a package?

-

We're not suggesting deploying a shrinkwrapped package directly and running "npm install" to install from shrinkwrap in production. We already have a build process to deal with binary modules and other automateable tasks. That's where we do the "npm install". We tar up the result and distribute the tarball. Since we test each build before shipping, we won't deploy something we didn't test. +For more details, check out the full docs on [npm shrinkwrap](https://npmjs.com/doc/shrinkwrap.html), from which much of the above is taken. -

-

It's still possible to pick up newly published versions of existing packages at build time. We assume force publish is not that common in the first place, let alone force publish that breaks compatibility. If you're worried about this, you can use git SHAs in the shrinkwrap or even consider maintaining a mirror of the part of the npm registry that you use and require human confirmation before mirroring unpublishes. +## Why not just check `node_modules` into git? -

-

Final thoughts

-

Of course, the details of each use case matter a lot, and the world doesn't have to pick just one solution. If you like checking in node_modules, you should keep doing that. We've chosen the shrinkwrap route because that works better for us. +One previously [proposed solution](http://www.mikealrogers.com/posts/nodemodules-in-git.html) is to "npm install" your dependencies during development and commit the results into source control. Then you deploy your app from a specific git SHA knowing you've got exactly the same bits that you tested in development. This does address the problem, but it has its own issues: for one, binaries are tricky because you need to "npm install" them to get their sources, but this builds the \[system-dependent\] binary too. You can avoid checking in the binaries and use "npm rebuild" at build time, but we've had a lot of difficulty trying to do this.[2] At best, this is second-class treatment for binary modules, which are critical for many important types of Node applications.[3] -

-

It's not exactly news that Joyent is heavy on Node. Node is the heart of our SmartDataCenter (SDC) product, whose public-facing web portal, public API, Cloud Analytics, provisioning, billing, heartbeating, and other services are all implemented in Node. That's why it's so important to us to have robust components (like logging and REST) and tools for understanding production failures postmortem, profile Node apps in production, and now managing Node dependencies. Again, we're interested to hear feedback from others using these tools. +Besides the issues with binary modules, this approach just felt wrong to many of us. There's a reason we don't check binaries into source control, and it's not just because they're platform-dependent. (After all, we could build and check in binaries for all supported platforms and operating systems.) It's because that approach is error-prone and redundant: error-prone because it introduces a new human failure mode where someone checks in a source change but doesn't regenerate all the binaries, and redundant because the binaries can always be built from the sources alone. An important principle of software version control is that you don't check in files derived directly from other files by a simple transformation.[4] +Instead, you check in the original sources and automate the transformations via the build process. -

-
-Dave Pacheco blogs at dtrace.org. +Dependencies are just like binaries in this regard: they're files derived from a simple transformation of something else that is (or could easily be) already available: the name and version of the dependency. Checking them in has all the same problems as checking in binaries: people could update package.json without updating the checked-in module (or vice versa). Besides that, adding new dependencies has to be done by hand, introducing more opportunities for error (checking in the wrong files, not checking in certain files, inadvertently changing files, and so on). Our feeling was: why check in this whole dependency tree (and create a mess for binary add-ons) when we could just check in the package name and version and have the build process do the rest? -

[1] Much of this section is taken directly from the "npm shrinkwrap" documentation. +Finally, the approach of checking in node\_modules doesn't really scale for us. We've got at least a dozen repos that will use restify, and it doesn't make sense to check that in everywhere when we could instead just specify which version each one is using. There's another principle at work here, which is **separation of concerns**: each repo specifies _what_ it needs, while the build process figures out _where to get it_. -

-

[2] We've had a lot of trouble with checking in node_modules with binary dependencies. The first problem is figuring out exactly which files not to check in (.o, .node, .dynlib, .so, *.a, ...). When Mark went to apply this to one of our internal services, the "npm rebuild" step blew away half of the dependency tree because it ran "make clean", which in dependency ldapjs brings the repo to a clean slate by blowing away its dependencies. Later, a new (but highly experienced) engineer on our team was tasked with fixing a bug in our Node-based DHCP server. To fix the bug, we went with a new dependency. He tried checking in node_modules, which added 190,000 lines of code (to this repo that was previously a few hundred LOC). And despite doing everything he could think of to do this correctly and test it properly, the change broke the build because of the binary modules. So having tried this approach a few times now, it appears quite difficult to get right, and as I pointed out above, the lack of actual documentation and real world examples suggests others either aren't using binary modules (which we know isn't true) or haven't had much better luck with this approach. +## What if an author republishes an existing version of a package? -

-

[3] Like a good Node-based distributed system, our architecture uses lots of small HTTP servers. Each of these serves a REST API using restify. restify uses the binary module node-dtrace-provider, which gives each of our services deep DTrace-based observability for free. So literally almost all of our components are or will soon be depending on a binary add-on. Additionally, the foundation of Cloud Analytics are a pair of binary modules that extract data from DTrace and kstat. So this isn't a corner case for us, and we don't believe we're exceptional in this regard. The popular hiredis package for interfacing with redis from Node is also a binary module. +We're not suggesting deploying a shrinkwrapped package directly and running "npm install" to install from shrinkwrap in production. We already have a build process to deal with binary modules and other automateable tasks. That's where we do the "npm install". We tar up the result and distribute the tarball. Since we test each build before shipping, we won't deploy something we didn't test. -

-

[4] Note that I said this is an important principle for software version control, not using git in general. People use git for lots of things where checking in binaries and other derived files is probably fine. Also, I'm not interested in proselytizing; if you want to do this for software version control too, go ahead. But don't do it out of ignorance of existing successful software engineering practices.

+It's still possible to pick up newly published versions of existing packages at build time. We assume force publish is not that common in the first place, let alone force publish that breaks compatibility. If you're worried about this, you can use git SHAs in the shrinkwrap or even consider maintaining a mirror of the part of the npm registry that you use and require human confirmation before mirroring unpublishes. + +## Final thoughts + +Of course, the details of each use case matter a lot, and the world doesn't have to pick just one solution. If you like checking in node\_modules, you should keep doing that. We've chosen the shrinkwrap route because that works better for us. + +It's not exactly news that Joyent is heavy on Node. Node is the heart of our SmartDataCenter (SDC) product, whose public-facing web portal, public API, Cloud Analytics, provisioning, billing, heartbeating, and other services are all implemented in Node. That's why it's so important to us to have robust components (like [logging](https://github.com/trentm/node-bunyan) and [REST](http://mcavage.github.com/node-restify/)) and tools for [understanding production failures postmortem](http://dtrace.org/blogs/dap/2012/01/13/playing-with-nodev8-postmortem-debugging/), [profiling Node apps in production](http://dtrace.org/blogs/dap/2012/01/05/where-does-your-node-program-spend-its-time/), and now managing Node dependencies. Again, we're interested to hear feedback from others using these tools. + +--- + +Dave Pacheco blogs at [dtrace.org](http://dtrace.org/blogs/dap/). + +

[1] Much of this section is taken directly from the "npm shrinkwrap" documentation.

+

[2] We've had a lot of trouble with checking in node_modules with binary dependencies. The first problem is figuring out exactly which files not to check in (.o, .node, .dynlib, .so, *.a, ...). When Mark went to apply this to one of our internal services, the "npm rebuild" step blew away half of the dependency tree because it ran "make clean", which in dependency ldapjs brings the repo to a clean slate by blowing away its dependencies. Later, a new (but highly experienced) engineer on our team was tasked with fixing a bug in our Node-based DHCP server. To fix the bug, we went with a new dependency. He tried checking in node_modules, which added 190,000 lines of code (to this repo that was previously a few hundred LOC). And despite doing everything he could think of to do this correctly and test it properly, the change broke the build because of the binary modules. So having tried this approach a few times now, it appears quite difficult to get right, and as I pointed out above, the lack of actual documentation and real world examples suggests others either aren't using binary modules (which we know isn't true) or haven't had much better luck with this approach.

+

[3] Like a good Node-based distributed system, our architecture uses lots of small HTTP servers. Each of these serves a REST API using restify. restify uses the binary module node-dtrace-provider, which gives each of our services deep DTrace-based observability for free. So literally almost all of our components are or will soon be depending on a binary add-on. Additionally, the foundation of Cloud Analytics are a pair of binary modules that extract data from DTrace and kstat. So this isn't a corner case for us, and we don't believe we're exceptional in this regard. The popular hiredis package for interfacing with redis from Node is also a binary module.

+

[4] Note that I said this is an important principle for software version control, not using git in general. People use git for lots of things where checking in binaries and other derived files is probably fine. Also, I'm not interested in proselytizing; if you want to do this for software version control too, go ahead. But don't do it out of ignorance of existing successful software engineering practices.

diff --git a/locale/en/blog/npm/npm-1-0-global-vs-local-installation.md b/locale/en/blog/npm/npm-1-0-global-vs-local-installation.md
index 380eb5f486010..2006e7380c457 100644
--- a/locale/en/blog/npm/npm-1-0-global-vs-local-installation.md
+++ b/locale/en/blog/npm/npm-1-0-global-vs-local-installation.md
@@ -8,60 +8,67 @@ slug: npm-1-0-global-vs-local-installation
 layout: blog-post.hbs
 ---
-

npm 1.0 is in release candidate mode. Go get it!

+_npm 1.0 is in release candidate mode. [Go get it!](http://groups.google.com/group/npm-/browse_thread/thread/43d3e76d71d1f141)_ -

More than anything else, the driving force behind the npm 1.0 rearchitecture was the desire to simplify what a package installation directory structure looks like.

+More than anything else, the driving force behind the npm 1.0 rearchitecture was the desire to simplify what a package installation directory structure looks like. -

In npm 0.x, there was a command called bundle that a lot of people liked. bundle let you install your dependencies locally in your project, but even still, it was basically a hack that never really worked very reliably.

+In npm 0.x, there was a command called `bundle` that a lot of people liked. `bundle` let you install your dependencies locally in your project, but even still, it was basically a hack that never really worked very reliably. -

Also, there was that activation/deactivation thing. That’s confusing.

+Also, there was that activation/deactivation thing. That’s confusing. -

Two paths

+## Two paths -

In npm 1.0, there are two ways to install things:

+In npm 1.0, there are two ways to install things: -
  1. globally —- This drops modules in {prefix}/lib/node_modules, and puts executable files in {prefix}/bin, where {prefix} is usually something like /usr/local. It also installs man pages in {prefix}/share/man, if they’re supplied.
  2. locally —- This installs your package in the current working directory. Node modules go in ./node_modules, executables go in ./node_modules/.bin/, and man pages aren’t installed at all.
+1. globally — This drops modules in `{prefix}/lib/node_modules`, and puts executable files in `{prefix}/bin`, where `{prefix}` is usually something like `/usr/local`. It also installs man pages in `{prefix}/share/man`, if they’re supplied.
+2. locally — This installs your package in the current working directory. Node modules go in `./node_modules`, executables go in `./node_modules/.bin/`, and man pages aren’t installed at all.

-

Which to choose

+## Which to choose -

Whether to install a package globally or locally depends on the global config, which is aliased to the -g command line switch.

+Whether to install a package globally or locally depends on the `global` config, which is aliased to the `-g` command line switch. -

Just like how global variables are kind of gross, but also necessary in some cases, global packages are important, but best avoided if not needed.

+Just like how global variables are kind of gross, but also necessary in some cases, global packages are important, but best avoided if not needed. -

In general, the rule of thumb is:

+In general, the rule of thumb is: -
  1. If you’re installing something that you want to use in your program, using require('whatever'), then install it locally, at the root of your project.
  2. If you’re installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable.
+1. If you’re installing something that you want to use _in_ your program, using `require('whatever')`, then install it locally, at the root of your project.
+2. If you’re installing something that you want to use in your _shell_, on the command line or something, install it globally, so that its binaries end up in your `PATH` environment variable.

-

When you can't choose

+## When you can't choose -

Of course, there are some cases where you want to do both. Coffee-script and Express both are good examples of apps that have a command line interface, as well as a library. In those cases, you can do one of the following:

+Of course, there are some cases where you want to do both. [Coffee-script](http://coffeescript.org/) and [Express](http://expressjs.com/) both are good examples of apps that have a command line interface, as well as a library. In those cases, you can do one of the following: -
  1. Install it in both places. Seriously, are you that short on disk space? It’s fine, really. They’re tiny JavaScript programs.
  2. Install it globally, and then npm link coffee-script or npm link express (if you’re on a platform that supports symbolic links.) Then you only need to update the global copy to update all the symlinks as well.
+1. Install it in both places. Seriously, are you that short on disk space? It’s fine, really. They’re tiny JavaScript programs.
+2. Install it globally, and then `npm link coffee-script` or `npm link express` (if you’re on a platform that supports symbolic links.) Then you only need to update the global copy to update all the symlinks as well.

-

The first option is the best in my opinion. Simple, clear, explicit. The second is really handy if you are going to re-use the same library in a bunch of different projects. (More on npm link in a future installment.)

+The first option is the best in my opinion. Simple, clear, explicit. The second is really handy if you are going to re-use the same library in a bunch of different projects. (More on `npm link` in a future installment.) -

You can probably think of other ways to do it by messing with environment variables. But I don’t recommend those ways. Go with the grain.

+You can probably think of other ways to do it by messing with environment variables. But I don’t recommend those ways. Go with the grain. -

Slight exception: It’s not always the cwd.

+## Slight exception: It’s not always the cwd. -

Let’s say you do something like this:

+Let’s say you do something like this: -
cd ~/projects/foo     # go into my project
+```
+cd ~/projects/foo     # go into my project
 npm install express   # ./node_modules/express
 cd lib/utils          # move around in there
 vim some-thing.js     # edit some stuff, work work work
-npm install redis     # ./lib/utils/node_modules/redis!? ew.
+npm install redis     # ./lib/utils/node_modules/redis!? ew.
+```

-

In this case, npm will install redis into ~/projects/foo/node_modules/redis. Sort of like how git will work anywhere within a git repository, npm will work anywhere within a package, defined by having a node_modules folder.

+In this case, npm will install `redis` into `~/projects/foo/node_modules/redis`. Sort of like how git will work anywhere within a git repository, npm will work anywhere within a package, defined by having a `node_modules` folder. -
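This "nearest package root" behavior can be sketched outside of npm: walk up from the current directory until a folder containing `node_modules` is found. The following is an illustrative simulation, not npm's actual lookup code, and `/tmp/foo` is a throwaway stand-in for `~/projects/foo`:

```shell
find_pkg_root() {
  # Walk upward from the given directory until we hit a directory
  # that contains a node_modules folder, then print it.
  dir="$1"
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/node_modules" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1
}

# Throwaway tree standing in for ~/projects/foo:
rm -rf /tmp/foo
mkdir -p /tmp/foo/node_modules /tmp/foo/lib/utils
find_pkg_root /tmp/foo/lib/utils   # prints /tmp/foo, not /tmp/foo/lib/utils
```

The same idea explains the git comparison: both tools look for a marker directory (`node_modules` here, `.git` for git) to decide where the "project" starts.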

Test runners and stuff

+## Test runners and stuff -

If your package's scripts.test command uses a command-line program installed by one of your dependencies, not to worry. npm makes ./node_modules/.bin the first entry in the PATH environment variable when running any lifecycle scripts, so this will work fine, even if your program is not globally installed:
+If your package's `scripts.test` command uses a command-line program installed by one of your dependencies, not to worry. npm makes `./node_modules/.bin` the first entry in the `PATH` environment variable when running any lifecycle scripts, so this will work fine, even if your program is not globally installed:

-

{ "name" : "my-program"
+```
+{ "name" : "my-program"
 , "version" : "1.2.3"
 , "dependencies": { "express": "*", "coffee-script": "*" }
 , "devDependencies": { "vows": "*" }
 , "scripts":
   { "test": "vows test/*.js"
-  , "preinstall": "cake build" } }
+  , "preinstall": "cake build" } }
+```
diff --git a/locale/en/blog/npm/npm-1-0-link.md b/locale/en/blog/npm/npm-1-0-link.md
index d8fd1304742f6..a08ddb39391b4 100644
--- a/locale/en/blog/npm/npm-1-0-link.md
+++ b/locale/en/blog/npm/npm-1-0-link.md
@@ -8,74 +8,81 @@ slug: npm-1-0-link
 layout: blog-post.hbs
 ---
-
+ , "preinstall": "cake build" } } +``` diff --git a/locale/en/blog/npm/npm-1-0-link.md b/locale/en/blog/npm/npm-1-0-link.md index d8fd1304742f6..a08ddb39391b4 100644 --- a/locale/en/blog/npm/npm-1-0-link.md +++ b/locale/en/blog/npm/npm-1-0-link.md @@ -8,74 +8,81 @@ slug: npm-1-0-link layout: blog-post.hbs --- -

npm 1.0 is in release candidate mode. Go get it!

+_npm 1.0 is in release candidate mode. [Go get it!](http://groups.google.com/group/npm-/browse_thread/thread/43d3e76d71d1f141)_ -

In npm 0.x, there was a command called link. With it, you could “link-install” a package so that changes would be reflected in real-time. This is especially handy when you’re actually building something. You could make a few changes, run the command again, and voila, your new code would be run without having to re-install every time.

+In npm 0.x, there was a command called `link`. With it, you could “link-install” a package so that changes would be reflected in real-time. This is especially handy when you’re actually building something. You could make a few changes, run the command again, and voila, your new code would be run without having to re-install every time. -

Of course, compiled modules still have to be rebuilt. That’s not ideal, but it’s a problem that will take more powerful magic to solve.

+Of course, compiled modules still have to be rebuilt. That’s not ideal, but it’s a problem that will take more powerful magic to solve. -

In npm 0.x, this was a pretty awful kludge. Back then, every package existed in some folder like:

+In npm 0.x, this was a pretty awful kludge. Back then, every package existed in some folder like: -
prefix/lib/node/.npm/my-package/1.3.6/package
-
+```
+prefix/lib/node/.npm/my-package/1.3.6/package
+```

-

and the package’s version and name could be inferred from the path. Then, symbolic links were set up that looked like:

+and the package’s version and name could be inferred from the path. Then, symbolic links were set up that looked like: -
prefix/lib/node/my-package@1.3.6 -> ./.npm/my-package/1.3.6/package
-
+```
+prefix/lib/node/my-package@1.3.6 -> ./.npm/my-package/1.3.6/package
+```

-

It was easy enough to point that symlink to a different location. However, since the package.json file could change, that meant that the connection between the version and the folder was not reliable.

+It was easy enough to point that symlink to a different location. However, since the _package.json file could change_, that meant that the connection between the version and the folder was not reliable. -

At first, this was just sort of something that we dealt with by saying, “Relink if you change the version.” However, as more and more edge cases arose, eventually the solution was to give link packages this fakey version of “9999.0.0-LINK-hash” so that npm knew it was an impostor. Sometimes the package was treated as if it had the 9999.0.0 version, and other times it was treated as if it had the version specified in the package.json.

+At first, this was just sort of something that we dealt with by saying, “Relink if you change the version.” However, as more and more edge cases arose, eventually the solution was to give link packages this fakey version of “9999.0.0-LINK-hash” so that npm knew it was an impostor. Sometimes the package was treated as if it had the 9999.0.0 version, and other times it was treated as if it had the version specified in the package.json. -

A better way

+## A better way -

For npm 1.0, we backed up and looked at what the actual use cases were. Most of the time when you link something you want one of the following:

+For npm 1.0, we backed up and looked at what the actual use cases were. Most of the time when you link something you want one of the following: -
    -
  1. globally install this package I’m working on so that I can run the command it creates and test its stuff as I work on it.
  2. -
  3. locally install my thing into some other thing that depends on it, so that the other thing can require() it.
  4. -
+1. globally install this package I’m working on so that I can run the command it creates and test its stuff as I work on it.
+2. locally install my thing into some _other_ thing that depends on it, so that the other thing can `require()` it.

-

And, in both cases, changes should be immediately apparent and not require any re-linking.

+And, in both cases, changes should be immediately apparent and not require any re-linking. -

Also, there’s a third use case that I didn’t really appreciate until I started writing more programs that had more dependencies:

+_Also_, there’s a third use case that I didn’t really appreciate until I started writing more programs that had more dependencies: -
  1. Globally install something, and use it in development in a bunch of projects, and then update them all at once so that they all use the latest version.

+
+3. Globally install something, and use it in development in a bunch of projects, and then update them all at once so that they all use the latest version.
+

Really, the second case above is a special-case of this third case.

+Really, the second case above is a special case of this third case.

-
+## Link devel → global

-

The first step is to link your local project into the global install space. (See global vs local installation for more on this global/local business.)

+The first step is to link your local project into the global install space. (See [global vs local installation](http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation/) for more on this global/local business.) -

I do this as I’m developing node projects (including npm itself).

+I do this as I’m developing node projects (including npm itself). -
cd ~/dev/js/node-tap  # go into the project dir
+```
+cd ~/dev/js/node-tap  # go into the project dir
 npm link              # create symlinks into {prefix}
-
-

Because of how I have my computer set up, with /usr/local as my install prefix, I end up with a symlink from /usr/local/lib/node_modules/tap pointing to ~/dev/js/node-tap, and the executable linked to /usr/local/bin/tap.

+``` -

Of course, if you set your paths differently, then you’ll have different results. (That’s why I tend to talk in terms of prefix rather than /usr/local.)

+Because of how I have my computer set up, with `/usr/local` as my install prefix, I end up with a symlink from `/usr/local/lib/node_modules/tap` pointing to `~/dev/js/node-tap`, and the executable linked to `/usr/local/bin/tap`.

-
+Of course, if you [set your paths differently](http://blog.nodejs.org/2011/04/04/development-environment/), then you’ll have different results. (That’s why I tend to talk in terms of `prefix` rather than `/usr/local`.)

-
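The layout that results can be simulated with a plain `ln -s`. This is a sketch under `/tmp` rather than the real prefix, so it runs anywhere; the paths are stand-ins for the ones described above:

```shell
rm -rf /tmp/prefixdemo
mkdir -p /tmp/prefixdemo/lib/node_modules /tmp/prefixdemo/dev/node-tap

# Stand-in for what npm link creates:
# {prefix}/lib/node_modules/tap -> the development folder
ln -s /tmp/prefixdemo/dev/node-tap /tmp/prefixdemo/lib/node_modules/tap

readlink /tmp/prefixdemo/lib/node_modules/tap   # prints /tmp/prefixdemo/dev/node-tap
```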

When you want to link the globally-installed package into your local development folder, you run npm link pkg where pkg is the name of the package that you want to install.

+## Link global → local -

For example, let’s say that I wanted to write some tap tests for my node-glob package. I’d first do the steps above to link tap into the global install space, and then I’d do this:

+When you want to link the globally-installed package into your local development folder, you run `npm link pkg` where `pkg` is the name of the package that you want to install. -
cd ~/dev/js/node-glob  # go to the project that uses the thing.
+For example, let’s say that I wanted to write some tap tests for my node-glob package. I’d _first_ do the steps above to link tap into the global install space, and _then_ I’d do this:
+
+```
+cd ~/dev/js/node-glob  # go to the project that uses the thing.
 npm link tap           # link the global thing into my project.
-
-

Now when I make changes in ~/dev/js/node-tap, they’ll be immediately reflected in ~/dev/js/node-glob/node_modules/tap.

+```
+
+Now when I make changes in `~/dev/js/node-tap`, they’ll be immediately reflected in `~/dev/js/node-glob/node_modules/tap`.

-
+## Link to stuff you _don’t_ build

-

Let’s say I have 15 sites that all use express. I want the benefits of local development, but I also want to be able to update all my dev folders at once. You can globally install express, and then link it into your local development folder.

+Let’s say I have 15 sites that all use express. I want the benefits of local development, but I also want to be able to update all my dev folders at once. You can globally install express, and then link it into your local development folder. -
npm install express -g  # install express globally
+```
+npm install express -g  # install express globally
 cd ~/dev/js/my-blog     # development folder one
 npm link express        # link the global express into ./node_modules
 cd ~/dev/js/photo-site  # other project folder
@@ -87,31 +94,31 @@ npm link express        # link express into here, as well
 
 npm update express -g   # update the global install.
                         # this also updates my project folders.
-
-

Caveat: Not For Real Servers

+``` -
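Why one global update reaches every project: each `npm link <pkg>` is just a symlink back to the same global copy, so every linked folder resolves the same files. A simulation with plain symlinks (made-up `/tmp` paths, no npm required):

```shell
rm -rf /tmp/linkdemo
mkdir -p /tmp/linkdemo/global/express \
         /tmp/linkdemo/my-blog/node_modules \
         /tmp/linkdemo/photo-site/node_modules
echo "1.0.0" > /tmp/linkdemo/global/express/version

# Both projects point at the single global copy:
ln -s /tmp/linkdemo/global/express /tmp/linkdemo/my-blog/node_modules/express
ln -s /tmp/linkdemo/global/express /tmp/linkdemo/photo-site/node_modules/express

# "Update the global install" once...
echo "2.0.0" > /tmp/linkdemo/global/express/version

# ...and every linked project sees the change immediately:
cat /tmp/linkdemo/my-blog/node_modules/express/version      # 2.0.0
cat /tmp/linkdemo/photo-site/node_modules/express/version   # 2.0.0
```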

npm link is a development tool. It’s awesome for managing packages on your local development box. But deploying with npm link is basically asking for problems, since it makes it super easy to update things without realizing it.

+## Caveat: Not For Real Servers -

Caveat 2: Sorry, Windows!

+`npm link` is a development tool. It’s _awesome_ for managing packages on your local development box. But deploying with `npm link` is basically asking for problems, since it makes it super easy to update things without realizing it.

-

I highly doubt that a native Windows node will ever have comparable symbolic link support to what Unix systems provide. I know that there are junctions and such, and I've heard legends about symbolic links on Windows 7.

+## Caveat 2: Sorry, Windows! -

When there is a native windows port of Node, if that native windows port has `fs.symlink` and `fs.readlink` support that is exactly identical to the way that they work on Unix, then this should work fine.

+I highly doubt that a native Windows node will ever have comparable symbolic link support to what Unix systems provide. I know that there are junctions and such, and I've heard legends about symbolic links on Windows 7. -

But I wouldn't hold my breath. Any bugs about this not working on a native Windows system (ie, not Cygwin) will most likely be closed with wontfix.

+When there is a native windows port of Node, if that native windows port has `fs.symlink` and `fs.readlink` support that is exactly identical to the way that they work on Unix, then this should work fine.
+
+But I wouldn't hold my breath. Any bugs about this not working on a native Windows system (ie, not Cygwin) will most likely be closed with `wontfix`.

-

Aside: Credit where Credit’s Due

+## Aside: Credit where Credit’s Due -

Back before the Great Package Management Wars of Node 0.1, before npm or kiwi or mode or seed.js could do much of anything, and certainly before any of them had more than 2 users, Mikeal Rogers invited me to the Couch.io offices for lunch to talk about this npm registry thingie I’d mentioned wanting to build. (That is, to convince me to use CouchDB for it.)

+Back before the Great Package Management Wars of Node 0.1, before npm or kiwi or mode or seed.js could do much of anything, and certainly before any of them had more than 2 users, Mikeal Rogers invited me to the Couch.io offices for lunch to talk about this npm registry thingie I’d mentioned wanting to build. (That is, to convince me to use CouchDB for it.) -

Since he was volunteering to build the first version of it, and since couch is pretty much the ideal candidate for this use-case, it was an easy sell.

+Since he was volunteering to build the first version of it, and since couch is pretty much the ideal candidate for this use-case, it was an easy sell. -

While I was there, he said, “Look. You need to be able to link a project directory as if it was installed as a package, and then have it all Just Work. Can you do that?”

+While I was there, he said, “Look. You need to be able to link a project directory as if it was installed as a package, and then have it all Just Work. Can you do that?” -

I was like, “Well, I don’t know… I mean, there’s these edge cases, and it doesn’t really fit with the existing folder structure very well…”

+I was like, “Well, I don’t know… I mean, there’s these edge cases, and it doesn’t really fit with the existing folder structure very well…” -

“Dude. Either you do it, or I’m going to have to do it, and then there’ll be another package manager in node, instead of writing a registry for npm, and it won’t be as good anyway. Don’t be python.”

+“Dude. Either you do it, or I’m going to have to do it, and then there’ll be _another_ package manager in node, instead of writing a registry for npm, and it won’t be as good anyway. Don’t be python.” -

The rest is history.

+The rest is history.
diff --git a/locale/en/blog/npm/npm-1-0-released.md b/locale/en/blog/npm/npm-1-0-released.md
index abc105708d448..1f912b4a33b21 100644
--- a/locale/en/blog/npm/npm-1-0-released.md
+++ b/locale/en/blog/npm/npm-1-0-released.md
@@ -8,32 +8,46 @@ slug: npm-1-0-released
 layout: blog-post.hbs
 ---
-

npm 1.0 has been released. Here are the highlights:

+npm 1.0 has been released. Here are the highlights:

-
+* [Global vs local installation](http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation/)
+* [ls displays a tree](http://blog.nodejs.org/2011/03/17/npm-1-0-the-new-ls/), instead of being a remote search
+* No more “activation” concept - dependencies are nested
+* [Updates to link command](http://blog.nodejs.org/2011/04/06/npm-1-0-link/)
+* Install script cleans up any 0.x cruft it finds. (That is, it removes old packages, so that they can be installed properly.)
+* Simplified “search” command. One line per package, rather than one line per version.
+* Renovated “completion” approach
+* More help topics
+* Simplified folder structure

-

The focus is on npm being a development tool, rather than an apt-wannabe.

+The focus is on npm being a development tool, rather than an apt-wannabe. -

Installing it

+## Installing it -

To get the new version, run this command:

+To get the new version, run this command: -
curl https://npmjs.com/install.sh | sh 
+```
+curl https://npmjs.com/install.sh | sh
+```

-

This will prompt to ask you if it’s ok to remove all the old 0.x cruft. If you want to not be asked, then do this:

+This will ask you if it's ok to remove all the old 0.x cruft. If you don't want to be asked, then do this:

-
curl https://npmjs.com/install.sh | clean=yes sh 
+```
+curl https://npmjs.com/install.sh | clean=yes sh
+```

-

Or, if you want to not do the cleanup, and leave the old stuff behind, then do this:

+Or, if you want to not do the cleanup, and leave the old stuff behind, then do this: -
curl https://npmjs.com/install.sh | clean=no sh 
+```
+curl https://npmjs.com/install.sh | clean=no sh
+```

-

A lot of people in the node community were brave testers and helped make this release a lot better (and swifter) than it would have otherwise been. Thanks :)

+A lot of people in the node community were brave testers and helped make this release a lot better (and swifter) than it would have otherwise been. Thanks :) -

Code Freeze

+## Code Freeze -

npm will not have any major feature enhancements or architectural changes for at least 6 months. There are interesting developments planned that leverage npm in some ways, but it’s time to let the client itself settle. Also, I want to focus attention on some other problems for a little while.

+npm will not have any major feature enhancements or architectural changes for at least 6 months. There are interesting developments planned that leverage npm in some ways, but it’s time to let the client itself settle. Also, I want to focus attention on some other problems for a little while. -

Of course, bug reports are always welcome.

+Of course, [bug reports](https://github.com/isaacs/npm/issues) are always welcome. -

See you at NodeConf!

+See you at NodeConf!
diff --git a/locale/en/blog/npm/npm-1-0-the-new-ls.md b/locale/en/blog/npm/npm-1-0-the-new-ls.md
index b2b72067e91fa..5730519636c4e 100644
--- a/locale/en/blog/npm/npm-1-0-the-new-ls.md
+++ b/locale/en/blog/npm/npm-1-0-the-new-ls.md
@@ -8,48 +8,49 @@ slug: npm-1-0-the-new-ls
 layout: blog-post.hbs
 ---
-

This is the first in a series of hopefully more than 1 posts, each detailing some aspect of npm 1.0.

+_This is the first in a series of hopefully more than 1 posts, each detailing some aspect of npm 1.0._ -

In npm 0.x, the ls command was a combination of both searching the registry as well as reporting on what you have installed.

+In npm 0.x, the `ls` command was a combination of both searching the registry as well as reporting on what you have installed. -

As the registry has grown in size, this has gotten unwieldy. Also, since npm 1.0 manages dependencies differently, nesting them in node_modules folder and installing locally by default, there are different things that you want to view.

+As the registry has grown in size, this has gotten unwieldy. Also, since npm 1.0 manages dependencies differently, nesting them in the `node_modules` folder and installing locally by default, there are different things that you want to view.

-

The functionality of the ls command was split into two different parts. search is now the way to find things on the registry (and it only reports one line per package, instead of one line per version), and ls shows a tree view of the packages that are installed locally.

+The functionality of the `ls` command was split into two different parts. `search` is now the way to find things on the registry (and it only reports one line per package, instead of one line per version), and `ls` shows a tree view of the packages that are installed locally. -

Here’s an example of the output:

+Here’s an example of the output: -
$ npm ls
+```
+$ npm ls
 npm@1.0.0 /Users/isaacs/dev-src/js/npm
-├── semver@1.0.1 
-├─┬ ronn@0.3.5 
-│ └── opts@1.2.1 
-└─┬ express@2.0.0rc3 extraneous 
-  ├─┬ connect@1.1.0 
-  │ ├── qs@0.0.7 
-  │ └── mime@1.2.1 
-  ├── mime@1.2.1 
+├── semver@1.0.1
+├─┬ ronn@0.3.5
+│ └── opts@1.2.1
+└─┬ express@2.0.0rc3 extraneous
+  ├─┬ connect@1.1.0
+  │ ├── qs@0.0.7
+  │ └── mime@1.2.1
+  ├── mime@1.2.1
   └── qs@0.0.7
 
-

This is after I’ve done npm install semver ronn express in the npm source directory. Since express isn’t actually a dependency of npm, it shows up with that “extraneous” marker.

+This is after I’ve done `npm install semver ronn express` in the npm source directory. Since express isn’t actually a dependency of npm, it shows up with that “extraneous” marker. -

Let’s see what happens when we create a broken situation:

+Let’s see what happens when we create a broken situation: -
$ rm -rf ./node_modules/express/node_modules/connect
+```
+$ rm -rf ./node_modules/express/node_modules/connect
 $ npm ls
 npm@1.0.0 /Users/isaacs/dev-src/js/npm
-├── semver@1.0.1 
-├─┬ ronn@0.3.5 
-│ └── opts@1.2.1 
-└─┬ express@2.0.0rc3 extraneous 
+├── semver@1.0.1
+├─┬ ronn@0.3.5
+│ └── opts@1.2.1
+└─┬ express@2.0.0rc3 extraneous
   ├── UNMET DEPENDENCY connect >= 1.1.0 < 2.0.0
-  ├── mime@1.2.1 
+  ├── mime@1.2.1
   └── qs@0.0.7
 
-
+```

Tree views are great for human readability, but some times you want to pipe that stuff to another program. For that output, I took the same datastructure, but instead of building up a treeview string for each line, it spits out just the folders like this:

+Tree views are great for human readability, but sometimes you want to pipe that stuff to another program. For that output, I took the same data structure, but instead of building up a treeview string for each line, it spits out just the folders like this:

-
$ npm ls -p
+```
+$ npm ls -p
 /Users/isaacs/dev-src/js/npm
 /Users/isaacs/dev-src/js/npm/node_modules/semver
 /Users/isaacs/dev-src/js/npm/node_modules/ronn
@@ -60,43 +61,43 @@ npm@1.0.0 /Users/isaacs/dev-src/js/npm
 /Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/mime
 /Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/mime
 /Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/qs
-
+``` -
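Because each line of the parseable output is just an absolute path, it composes with ordinary Unix tools. A sketch using a canned two-line listing (stand-in paths) in place of a live `npm ls -p`:

```shell
printf '%s\n' \
  /Users/isaacs/dev-src/js/npm \
  /Users/isaacs/dev-src/js/npm/node_modules/semver \
  | xargs -n1 basename   # prints the package folder names: npm, semver
```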

Since you sometimes want a bigger view, I added the --long option to (shorthand: -l) to spit out more info:

+Since you sometimes want a bigger view, I added the `--long` option (shorthand: `-l`) to spit out more info:

-
$ npm ls -l
-npm@1.0.0 
+```
+$ npm ls -l
+npm@1.0.0
 │ /Users/isaacs/dev-src/js/npm
 │ A package manager for node
 │ git://github.com/isaacs/npm.git
 │ https://npmjs.com/
-├── semver@1.0.1 
+├── semver@1.0.1
 │   ./node_modules/semver
 │   The semantic version parser used by npm.
 │   git://github.com/isaacs/node-semver.git
-├─┬ ronn@0.3.5 
+├─┬ ronn@0.3.5
 │ │ ./node_modules/ronn
 │ │ markdown to roff and html converter
-│ └── opts@1.2.1 
+│ └── opts@1.2.1
 │     ./node_modules/ronn/node_modules/opts
 │     Command line argument parser written in the style of commonjs. To be used with node.js
-└─┬ express@2.0.0rc3 extraneous 
+└─┬ express@2.0.0rc3 extraneous
   │ ./node_modules/express
   │ Sinatra inspired web development framework
-  ├─┬ connect@1.1.0 
+  ├─┬ connect@1.1.0
   │ │ ./node_modules/express/node_modules/connect
   │ │ High performance middleware framework
   │ │ git://github.com/senchalabs/connect.git
-  │ ├── qs@0.0.7 
+  │ ├── qs@0.0.7
   │ │   ./node_modules/express/node_modules/connect/node_modules/qs
   │ │   querystring parser
-  │ └── mime@1.2.1 
+  │ └── mime@1.2.1
   │     ./node_modules/express/node_modules/connect/node_modules/mime
   │     A comprehensive library for mime-type mapping
-  ├── mime@1.2.1 
+  ├── mime@1.2.1
   │   ./node_modules/express/node_modules/mime
   │   A comprehensive library for mime-type mapping
-  └── qs@0.0.7 
+  └── qs@0.0.7
       ./node_modules/express/node_modules/qs
       querystring parser
 
@@ -113,22 +114,23 @@ $ npm ls -lp
 /Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/qs:qs@0.0.7::::
 
-
+```

-And, if you want to get at the globally-installed modules, you can use ls with the global flag:
+And, if you want to get at the globally-installed modules, you can use `ls` with the global flag:

-
-$ npm ls -g
+```
+$ npm ls -g
 /usr/local
-├─┬ A@1.2.3 -> /Users/isaacs/dev-src/js/A
-│ ├── B@1.2.3 -> /Users/isaacs/dev-src/js/B
-│ └─┬ npm@0.3.15 
-│   └── semver@1.0.1 
-├─┬ B@1.2.3 -> /Users/isaacs/dev-src/js/B
-│ └── A@1.2.3 -> /Users/isaacs/dev-src/js/A
-├── glob@2.0.5 
-├─┬ npm@1.0.0 -> /Users/isaacs/dev-src/js/npm
-│ ├── semver@1.0.1 
-│ └─┬ ronn@0.3.5 
-│   └── opts@1.2.1 
-└── supervisor@0.1.2 -> /Users/isaacs/dev-src/js/node-supervisor
+├─┬ A@1.2.3 -> /Users/isaacs/dev-src/js/A
+│ ├── B@1.2.3 -> /Users/isaacs/dev-src/js/B
+│ └─┬ npm@0.3.15
+│   └── semver@1.0.1
+├─┬ B@1.2.3 -> /Users/isaacs/dev-src/js/B
+│ └── A@1.2.3 -> /Users/isaacs/dev-src/js/A
+├── glob@2.0.5
+├─┬ npm@1.0.0 -> /Users/isaacs/dev-src/js/npm
+│ ├── semver@1.0.1
+│ └─┬ ronn@0.3.5
+│   └── opts@1.2.1
+└── supervisor@0.1.2 -> /Users/isaacs/dev-src/js/node-supervisor
 
 $ npm ls -gpl
 /usr/local:::::
@@ -142,6 +144,6 @@ $ npm ls -gpl
 /usr/local/lib/node_modules/npm/node_modules/ronn:ronn@0.3.5::::/Users/isaacs/dev-src/js/npm/node_modules/ronn
 /usr/local/lib/node_modules/npm/node_modules/ronn/node_modules/opts:opts@1.2.1::::/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts
 /usr/local/lib/node_modules/supervisor:supervisor@0.1.2::::/Users/isaacs/dev-src/js/node-supervisor
-
+```

-Those -> flags are indications that the package is link-installed, which will be covered in the next installment.

+Those `->` flags are indications that the package is link-installed, which will be covered in the next installment. diff --git a/locale/en/blog/npm/peer-dependencies.md b/locale/en/blog/npm/peer-dependencies.md index 76aa425650b81..9d218ad7f1010 100644 --- a/locale/en/blog/npm/peer-dependencies.md +++ b/locale/en/blog/npm/peer-dependencies.md @@ -7,9 +7,7 @@ slug: peer-dependencies layout: blog-post.hbs --- -Reposted from [Domenic's -blog](http://domenic.me/2013/02/08/peer-dependencies/) with -permission. Thanks! +_Reposted from [Domenic's blog](http://domenic.me/2013/02/08/peer-dependencies/) with permission. Thanks!_ npm is awesome as a package manager. In particular, it handles sub-dependencies very well: if my package depends on `request` version 2 and `some-other-library`, but `some-other-library` depends on `request` version 1, the resulting diff --git a/locale/en/blog/uncategorized/an-easy-way-to-build-scalable-network-programs.md b/locale/en/blog/uncategorized/an-easy-way-to-build-scalable-network-programs.md index f1e1391f92889..784df4f50567c 100644 --- a/locale/en/blog/uncategorized/an-easy-way-to-build-scalable-network-programs.md +++ b/locale/en/blog/uncategorized/an-easy-way-to-build-scalable-network-programs.md @@ -12,8 +12,8 @@ Suppose you're writing a web server which does video encoding on each file uploa Using Node does not mean that you have to write a video encoding algorithm in JavaScript (a language without even 64 bit integers) and crunch away in the main server event loop. The suggested approach is to separate the I/O bound task of receiving uploads and serving downloads from the compute bound task of video encoding. In the case of video encoding this is accomplished by forking out to ffmpeg. Node provides advanced means of asynchronously controlling subprocesses for work like this. -It has also been suggested that Node does not take advantage of multicore machines. 
Node has long supported load-balancing connections over multiple processes in just a few lines of code - in this way a Node server will use the available cores. In coming releases we'll make it even easier: just pass --balance on the command line and Node will manage the cluster of processes. +It has also been suggested that Node does not take advantage of multicore machines. Node has long supported load-balancing connections over multiple processes in just a few lines of code - in this way a Node server will use the available cores. In coming releases we'll make it even easier: just pass `--balance` on the command line and Node will manage the cluster of processes. Node has a clear purpose: provide an easy way to build scalable network programs. It is not a tool for every problem. Do not write a ray tracer with Node. Do not write a web browser with Node. Do however reach for Node if tasked with writing a DNS server, DHCP server, or even a video encoding server. -By relying on the kernel to schedule and preempt computationally expensive tasks and to load balance incoming connections, Node appears less magical than server platforms that employ userland scheduling. So far, our focus on simplicity and transparency has paid off: the number of success stories from developers and corporations who are adopting the technology continues to grow. +By relying on the kernel to schedule and preempt computationally expensive tasks and to load balance incoming connections, Node appears less magical than server platforms that employ userland scheduling. 
So far, our focus on simplicity and transparency has paid off: [the](http://www.joyent.com/blog/node-js-meetup-distributed-web-architectures/) [number](http://venturebeat.com/2011/08/16/linkedin-node/) [of](http://corp.klout.com/blog/2011/10/the-tech-behind-klout-com/) [success](http://www.joelonsoftware.com/items/2011/09/13.html) [stories](http://pow.cx/) from developers and corporations who are adopting the technology continues to grow. diff --git a/locale/en/blog/uncategorized/development-environment.md b/locale/en/blog/uncategorized/development-environment.md index 7fbd77c847c36..76687ac0b64ed 100644 --- a/locale/en/blog/uncategorized/development-environment.md +++ b/locale/en/blog/uncategorized/development-environment.md @@ -8,21 +8,29 @@ slug: development-environment layout: blog-post.hbs --- -If you're compiling a software package because you need a particular version (e.g. the latest), then it requires a little bit more maintenance than using a package manager like dpkg. Software that you compile yourself should *not* go into /usr, it should go into your home directory. This is part of being a software developer. +If you're compiling a software package because you need a particular version (e.g. the latest), then it requires a little bit more maintenance than using a package manager like `dpkg`. Software that you compile yourself should *not* go into `/usr`, it should go into your home directory. This is part of being a software developer. -One way of doing this is to install everything into $HOME/local/$PACKAGE. Here is how I install node on my machine:
./configure --prefix=$HOME/local/node-v0.4.5 && make install
+One way of doing this is to install everything into `$HOME/local/$PACKAGE`. Here is how I install node on my machine:

-To have my paths automatically set I put this inside my $HOME/.zshrc:
-PATH="$HOME/local/bin:/opt/local/bin:/usr/bin:/sbin:/bin"
+```bash
+./configure --prefix=$HOME/local/node-v0.4.5 && make install
+```
+
+To have my paths automatically set I put this inside my `$HOME/.zshrc`:
+
+```bash
+PATH="$HOME/local/bin:/opt/local/bin:/usr/bin:/sbin:/bin"
 LD_LIBRARY_PATH="/opt/local/lib:/usr/local/lib:/usr/lib"
 for i in $HOME/local/*; do
-  [ -d $i/bin ] && PATH="${i}/bin:${PATH}"
-  [ -d $i/sbin ] && PATH="${i}/sbin:${PATH}"
-  [ -d $i/include ] && CPATH="${i}/include:${CPATH}"
-  [ -d $i/lib ] && LD_LIBRARY_PATH="${i}/lib:${LD_LIBRARY_PATH}"
-  [ -d $i/lib/pkgconfig ] && PKG_CONFIG_PATH="${i}/lib/pkgconfig:${PKG_CONFIG_PATH}"
-  [ -d $i/share/man ] && MANPATH="${i}/share/man:${MANPATH}"
-done
+ [ -d $i/bin ] && PATH="${i}/bin:${PATH}" + [ -d $i/sbin ] && PATH="${i}/sbin:${PATH}" + [ -d $i/include ] && CPATH="${i}/include:${CPATH}" + [ -d $i/lib ] && LD_LIBRARY_PATH="${i}/lib:${LD_LIBRARY_PATH}" + [ -d $i/lib/pkgconfig ] && PKG_CONFIG_PATH="${i}/lib/pkgconfig:${PKG_CONFIG_PATH}" + [ -d $i/share/man ] && MANPATH="${i}/share/man:${MANPATH}" +done +``` -Node is under sufficiently rapid development that everyone should be compiling it themselves. A corollary of this is that npm (which should be installed alongside Node) does not require root to install packages. +Node is under sufficiently rapid development that *everyone* should be compiling it themselves. A corollary of this is that `npm` (which should be installed alongside Node) does not require root to install packages. -CPAN and RubyGems have blurred the lines between development tools and system package managers. With npm we wish to draw a clear line: it is not a system package manager. It is not for installing firefox or ffmpeg or OpenSSL; it is for rapidly downloading, building, and setting up Node packages. npm is a development tool. When a program written in Node becomes sufficiently mature it should be distributed as a tarball, .deb, .rpm, or other package system. It should not be distributed to end users with npm. +CPAN and RubyGems have blurred the lines between development tools and system package managers. With `npm` we wish to draw a clear line: it is not a system package manager. It is not for installing firefox or ffmpeg or OpenSSL; it is for rapidly downloading, building, and setting up Node packages. `npm` is a *development* tool. When a program written in Node becomes sufficiently mature it should be distributed as a tarball, `.deb`, `.rpm`, or other package system. It should not be distributed to end users with `npm`. 
diff --git a/locale/en/blog/uncategorized/growing-up.md b/locale/en/blog/uncategorized/growing-up.md index 57fd44251544a..3c53bea321782 100644 --- a/locale/en/blog/uncategorized/growing-up.md +++ b/locale/en/blog/uncategorized/growing-up.md @@ -8,8 +8,8 @@ slug: growing-up layout: blog-post.hbs --- -This week Microsoft announced support for Node in Windows Azure, their cloud computing platform. For the Node core team and the community, this is an important milestone. We've worked hard over the past six months reworking Node's machinery to support IO completion ports and Visual Studio to provide a good native port to Windows. The overarching goal of the port was to expand our user base to the largest number of developers. Happily, this has paid off in the form of being a first class citizen on Azure. Many users who would have never used Node as a pure unix tool are now up and running on the Windows platform. More users translates into a deeper and better ecosystem of modules, which makes for a better experience for everyone. +This week Microsoft announced [support for Node in Windows Azure](https://www.windowsazure.com/en-us/develop/nodejs/), their cloud computing platform. For the Node core team and the community, this is an important milestone. We've worked hard over the past six months reworking Node's machinery to support IO completion ports and Visual Studio to provide a good native port to Windows. The overarching goal of the port was to expand our user base to the largest number of developers. Happily, this has paid off in the form of being a first class citizen on Azure. Many users who would have never used Node as a pure unix tool are now up and running on the Windows platform. More users translates into a deeper and better ecosystem of modules, which makes for a better experience for everyone. -We also redesigned our website - something that we've put off for a long time because we felt that Node was too nascent to dedicate marketing to it. 
But now that we have binary distributions for Macintosh and Windows, have bundled npm, and are serving millions of users at various companies, we felt ready to indulge in a new website and share a few of our success stories on the home page.
+We also redesigned [our website](https://nodejs.org/) - something that we've put off for a long time because we felt that Node was too nascent to dedicate marketing to it. But now that we have binary distributions for Macintosh and Windows, have bundled npm, and are [serving millions of users](https://twitter.com/#!/mranney/status/145778414165569536) at various companies, we felt ready to indulge in a new website and share a few of our success stories on the home page.

 Work is on-going. We continue to improve the software, making performance improvements and adding isolate support, but Node is growing up.
diff --git a/locale/en/blog/uncategorized/jobs-nodejs-org.md b/locale/en/blog/uncategorized/jobs-nodejs-org.md
index c75e9d9ad7696..efe36b082d3c9 100644
--- a/locale/en/blog/uncategorized/jobs-nodejs-org.md
+++ b/locale/en/blog/uncategorized/jobs-nodejs-org.md
@@ -14,4 +14,4 @@ We are starting an official jobs board for Node. There are two goals for this
 2. Make some money. We work hard to build this platform and taking a small tax for job posts seems like a reasonable "tip jar".

-jobs.nodejs.org
+[jobs.nodejs.org](http://jobs.nodejs.org)
diff --git a/locale/en/blog/uncategorized/libuv-status-report.md b/locale/en/blog/uncategorized/libuv-status-report.md
index fcba0bcf9aaba..151bff2978181 100644
--- a/locale/en/blog/uncategorized/libuv-status-report.md
+++ b/locale/en/blog/uncategorized/libuv-status-report.md
@@ -8,41 +8,41 @@ slug: libuv-status-report
 layout: blog-post.hbs
 ---

-We announced back in July that with Microsoft's support Joyent would be porting Node to Windows.
This effort is ongoing but I thought it would be nice to make a status report post about the new platform library libuv which has resulted from porting Node to Windows. +We [announced](http://blog.nodejs.org/2011/06/23/porting-node-to-windows-with-microsoft%E2%80%99s-help/) back in July that with Microsoft's support Joyent would be porting Node to Windows. This effort is ongoing but I thought it would be nice to make a status report post about the new platform library [`libuv`](https://github.com/libuv/libuv) which has resulted from porting Node to Windows. -libuv's purpose is to abstract platform-dependent code in Node into one place where it can be tested for correctness and performance before bindings to V8 are added. Since Node is totally non-blocking, libuv turns out to be a rather useful library itself: a BSD-licensed, minimal, high-performance, cross-platform networking library. +`libuv`'s purpose is to abstract platform-dependent code in Node into one place where it can be tested for correctness and performance before bindings to V8 are added. Since Node is totally non-blocking, `libuv` turns out to be a rather useful library itself: a BSD-licensed, minimal, high-performance, cross-platform networking library. -We attempt to not reinvent the wheel where possible. The entire Unix backend sits heavily on Marc Lehmann's beautiful libraries libev and libeio. For DNS we integrated with Daniel Stenberg's C-Ares. For cross-platform build-system support we're relying on Chrome's GYP meta-build system. +We attempt to not reinvent the wheel where possible. The entire Unix backend sits heavily on Marc Lehmann's beautiful libraries [libev](http://software.schmorp.de/pkg/libev.html) and [libeio](http://software.schmorp.de/pkg/libeio.html). For DNS we integrated with Daniel Stenberg's [C-Ares](http://c-ares.haxx.se/). For cross-platform build-system support we're relying on Chrome's [GYP](http://code.google.com/p/gyp/) meta-build system. 
 The current implemented features are:

-
-  • Non-blocking TCP sockets (using IOCP on Windows)
-  • Non-blocking named pipes
-  • UDP
-  • Timers
-  • Child process spawning
-  • Asynchronous DNS via c-ares or uv_getaddrinfo.
-  • Asynchronous file system APIs uv_fs_*
-  • High resolution time uv_hrtime
-  • Current executable path look up uv_exepath
-  • Thread pool scheduling uv_queue_work
+ +* Non-blocking TCP sockets (using IOCP on Windows) +* Non-blocking named pipes +* UDP +* Timers +* Child process spawning +* Asynchronous DNS via [c-ares](http://c-ares.haxx.se/) or `uv_getaddrinfo`. +* Asynchronous file system APIs `uv_fs_*` +* High resolution time `uv_hrtime` +* Current executable path look up `uv_exepath` +* Thread pool scheduling `uv_queue_work` + The features we are working on still are -
-
-  • File system events (Currently supports inotify, ReadDirectoryChangesW and will support kqueue and event ports in the near future.) uv_fs_event_t
-  • VT100 TTY uv_tty_t
-  • Socket sharing between processes uv_ipc_t (planned API)
-For complete documentation see the header file: include/uv.h. There are a number of tests in the test directory which demonstrate the API. - -libuv supports Microsoft Windows operating systems since Windows XP SP2. It can be built with either Visual Studio or MinGW. Solaris 121 and later using GCC toolchain. Linux 2.6 or better using the GCC toolchain. Macinotsh Darwin using the GCC or XCode toolchain. It is known to work on the BSDs but we do not check the build regularly. - -In addition to Node v0.5, a number of projects have begun to use libuv: - -We hope to see more people contributing and using libuv in the future! + +* File system events (Currently supports inotify, `ReadDirectoryChangesW` and will support kqueue and event ports in the near future.) `uv_fs_event_t` +* VT100 TTY `uv_tty_t` +* Socket sharing between processes `uv_ipc_t` ([planned API](https://gist.github.com/1233593)) + +For complete documentation see the header file: [include/uv.h](https://github.com/libuv/libuv/blob/03d0c57ea216abd611286ff1e58d4e344a459f76/include/uv.h). There are a number of tests in [the test directory](https://github.com/libuv/libuv/tree/3ca382be741ec6ce6a001f0db04d6375af8cd642/test) which demonstrate the API. + +`libuv` supports Microsoft Windows operating systems since Windows XP SP2. It can be built with either Visual Studio or MinGW. Solaris 121 and later using GCC toolchain. Linux 2.6 or better using the GCC toolchain. Macinotsh Darwin using the GCC or XCode toolchain. It is known to work on the BSDs but we do not check the build regularly. 
+ +In addition to Node v0.5, a number of projects have begun to use `libuv`: + +* Mozilla's [Rust](https://github.com/graydon/rust) +* Tim Caswell's [LuaNode](https://github.com/creationix/luanode) +* Ben Noordhuis and Bert Belder's [Phode](https://github.com/bnoordhuis/phode) async PHP project +* Kerry Snyder's [libuv-csharp](https://github.com/kersny/libuv-csharp) +* Andrea Lattuada's [web server](https://gist.github.com/1195428) + +We hope to see more people contributing and using `libuv` in the future! diff --git a/locale/en/blog/uncategorized/node-meetup-this-thursday.md b/locale/en/blog/uncategorized/node-meetup-this-thursday.md index 73d4b31c402d3..ec699f72467e9 100644 --- a/locale/en/blog/uncategorized/node-meetup-this-thursday.md +++ b/locale/en/blog/uncategorized/node-meetup-this-thursday.md @@ -8,7 +8,7 @@ slug: node-meetup-this-thursday layout: blog-post.hbs --- -https://nodejs.org/meetup/ +https://nodejs.org/meetup/ http://nodemeetup.eventbrite.com/ Three companies will describe their distributed Node applications. Sign up soon, space is limited! diff --git a/locale/en/blog/uncategorized/office-hours.md b/locale/en/blog/uncategorized/office-hours.md index 1ab29e543ba7c..567049091ad78 100644 --- a/locale/en/blog/uncategorized/office-hours.md +++ b/locale/en/blog/uncategorized/office-hours.md @@ -8,8 +8,8 @@ slug: office-hours layout: blog-post.hbs --- -Starting next Thursday Isaac, Tom, and I will be holding weekly office hours at Joyent HQ in San Francisco. Office hours are meant to be subdued working time - there are no talks and no alcohol. Bring your bugs or just come and hack with us. 
+Starting next Thursday Isaac, Tom, and I will be holding weekly office hours at [Joyent HQ](http://maps.google.com/maps?q=345+California+St,+San+Francisco,+CA+94104&layer=c&sll=37.793040,-122.400491&cbp=13,178.31,,0,-60.77&cbll=37.793131,-122.400484&hl=en&sspn=0.006295,0.006295&ie=UTF8&hq=&hnear=345+California+St,+San+Francisco,+California+94104&ll=37.793131,-122.400484&spn=0.001295,0.003428&z=19&panoid=h0dlz3VG-hMKlzOu0LxMIg) in San Francisco. Office hours are meant to be subdued working time - there are no talks and no alcohol. Bring your bugs or just come and hack with us. -Our building requires that everyone attending be on a list so you must sign up at Event Brite. +Our building requires that everyone attending be on a list so you must sign up at [Event Brite](http://nodeworkup01.eventbrite.com/). We start at 4p and end promptly at 8p. diff --git a/locale/en/blog/uncategorized/porting-node-to-windows-with-microsofts-help.md b/locale/en/blog/uncategorized/porting-node-to-windows-with-microsofts-help.md index 56d0f8af2b4ec..8efd9fa2d7b50 100644 --- a/locale/en/blog/uncategorized/porting-node-to-windows-with-microsofts-help.md +++ b/locale/en/blog/uncategorized/porting-node-to-windows-with-microsofts-help.md @@ -8,8 +8,8 @@ slug: porting-node-to-windows-with-microsofts-help layout: blog-post.hbs --- -I'm pleased to announce that Microsoft is partnering with Joyent in formally contributing resources towards porting Node to Windows. As you may have heard in a talk we gave earlier this year, we have started the undertaking of a native port to Windows - targeting the high-performance IOCP API. - -This requires a rather large modification of the core structure, and we're very happy to have official guidance and engineering resources from Microsoft. Rackspace is also contributing Bert Belder's time to this undertaking. - +I'm pleased to announce that Microsoft is partnering with Joyent in formally contributing resources towards porting Node to Windows. 
As you may have heard in [a talk](/static/documents/nodeconf.pdf) we gave earlier this year, we have started the undertaking of a native port to Windows - targeting the high-performance IOCP API. + +This requires a rather large modification of the core structure, and we're very happy to have official guidance and engineering resources from Microsoft. [Rackspace](https://www.cloudkick.com/) is also contributing [Bert Belder](https://github.com/piscisaureus)'s time to this undertaking. + The result will be an official binary node.exe releases on nodejs.org, which will work on Windows Azure and other Windows versions as far back as Server 2003. diff --git a/locale/en/blog/uncategorized/profiling-node-js.md b/locale/en/blog/uncategorized/profiling-node-js.md index f23901f10fcdc..21bdeb8de0be5 100644 --- a/locale/en/blog/uncategorized/profiling-node-js.md +++ b/locale/en/blog/uncategorized/profiling-node-js.md @@ -8,55 +8,63 @@ slug: profiling-node-js layout: blog-post.hbs --- -It's incredibly easy to visualize where your Node program spends its time using DTrace and node-stackvis (a Node port of Brendan Gregg's FlameGraph tool): - -
-
-  1. Run your Node.js program as usual.
-  2. In another terminal, run:
-       $ dtrace -n 'profile-97/execname == "node" && arg1/{
-           @[jstack(150, 8000)] = count(); } tick-60s { exit(0); }' > stacks.out
-     This will sample about 100 times per second for 60 seconds and emit results to stacks.out. Note that this will sample all running programs called "node". If you want a specific process, replace execname == "node" with pid == 12345 (the process id).
-  3. Use the "stackvis" tool to transform this directly into a flame graph. First, install it:
-       $ npm install -g stackvis
-     then use stackvis to convert the DTrace output to a flamegraph:
-       $ stackvis dtrace flamegraph-svg < stacks.out > stacks.svg
-  4. Open stacks.svg in your favorite browser.
+It's incredibly easy to visualize where your Node program spends its time using DTrace and [node-stackvis](https://github.com/davepacheco/node-stackvis) (a Node port of Brendan Gregg's [FlameGraph](https://github.com/brendangregg/FlameGraph/) tool): + +1. Run your Node.js program as usual. +2. In another terminal, run: + + ``` + $ dtrace -n 'profile-97/execname == "node" && arg1/{ + @[jstack(150, 8000)] = count(); } tick-60s { exit(0); }' > stacks.out + ``` + + This will sample about 100 times per second for 60 seconds and emit results to stacks.out. **Note that this will sample all running programs called "node". If you want a specific process, replace `execname == "node"` with `pid == 12345` (the process id).** +3. Use the "stackvis" tool to transform this directly into a flame graph. First, install it: + + ``` + npm install -g stackvis + ``` + + then use `stackvis` to convert the DTrace output to a flamegraph: + + ``` + stackvis dtrace flamegraph-svg < stacks.out > stacks.svg + ``` + +4. Open stacks.svg in your favorite browser. You'll be looking at something like this: -'Hello World' HTTP server flame graph +[!['Hello World' HTTP server flame graph](https://cs.brown.edu/people/dapachec/helloworld.svg)](https://cs.brown.edu/people/dapachec/helloworld.svg) -This is a visualization of all of the profiled call stacks. This example is from the "hello world" HTTP server on the Node.js home page under load. Start at the bottom, where you have "main", which is present in most Node stacks because Node spends most on-CPU time in the main thread. Above each row, you have the functions called by the frame beneath it. As you move up, you'll see actual JavaScript function names. The boxes in each row are not in chronological order, but their width indicates how much time was spent there. When you hover over each box, you can see exactly what percentage of time is spent in each function. This lets you see at a glance where your program spends its time. 
+This is a visualization of all of the profiled call stacks. This example is from the "hello world" HTTP server on the [Node.js](https://nodejs.org) home page under load. Start at the bottom, where you have "main", which is present in most Node stacks because Node spends most on-CPU time in the main thread. Above each row, you have the functions called by the frame beneath it. As you move up, you'll see actual JavaScript function names. The boxes in each row are not in chronological order, but their width indicates how much time was spent there. When you hover over each box, you can see exactly what percentage of time is spent in each function. This lets you see at a glance where your program spends its time. That's the summary. There are a few prerequisites: -
-
-  • You must gather data on a system that supports DTrace with the Node.js ustack helper. For now, this pretty much means illumos-based systems like SmartOS, including the Joyent Cloud. MacOS users: OS X supports DTrace, but not ustack helpers. The way to get this changed is to contact your Apple developer liaison (if you're lucky enough to have one) or file a bug report at bugreport.apple.com. I'd suggest referencing existing bugs 5273057 and 11206497. More bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this.
-  • You must be on 32-bit Node.js 0.6.7 or later, built --with-dtrace. The helper doesn't work with 64-bit Node yet. On illumos (including SmartOS), development releases (the 0.7.x train) include DTrace support by default.
+* You must gather data on a system that supports DTrace with the Node.js ustack helper. For now, this pretty much means [illumos](http://illumos.org/)\-based systems like [SmartOS](http://smartos.org/), including the Joyent Cloud. **MacOS users:** OS X supports DTrace, but not ustack helpers. The way to get this changed is to contact your Apple developer liaison (if you're lucky enough to have one) or **file a bug report at bugreport.apple.com**. I'd suggest referencing existing bugs 5273057 and 11206497. More bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this. +* You must be on 32-bit Node.js 0.6.7 or later, built `--with-dtrace`. The helper doesn't work with 64-bit Node yet. On illumos (including SmartOS), development releases (the 0.7.x train) include DTrace support by default. There are a few other notes: -
-
-  • You can absolutely profile apps in production, not just development, since compiling with DTrace support has very minimal overhead. You can start and stop profiling without restarting your program.
-  • You may want to run the stacks.out output through c++filt to demangle C++ symbols. Be sure to use the c++filt that came with the compiler you used to build Node. For example:
-       c++filt < stacks.out > demangled.out
    +* You can absolutely profile apps **in production**, not just development, since compiling with DTrace support has very minimal overhead. You can start and stop profiling without restarting your program. +* You may want to run the stacks.out output through `c++filt` to demangle C++ symbols. Be sure to use the `c++filt` that came with the compiler you used to build Node. For example: + + ``` + c++filt < stacks.out > demangled.out + ``` + then you can use demangled.out to create the flamegraph. -
-  • If you want, you can filter stacks containing a particular function. The best way to do this is to first collapse the original DTrace output, then grep out what you want:
-       $ stackvis dtrace collapsed < stacks.out | grep SomeFunction > collapsed.out
-       $ stackvis collapsed flamegraph-svg < collapsed.out > stacks.svg
-  • If you've used Brendan's FlameGraph tools, you'll notice the coloring is a little different in the above flamegraph. I ported his tools to Node first so I could incorporate it more easily into other Node programs, but I've also been playing with different coloring options. The current default uses hue to denote stack depth and saturation to indicate time spent. (These are also indicated by position and size.) Other ideas include coloring by module (so V8, JavaScript, libc, etc. show up as different colors.)
+* If you want, you can filter stacks containing a particular function. The best way to do this is to first collapse the original DTrace output, then grep out what you want: -For more on the underlying pieces, see my previous post on Node.js profiling and Brendan's post on Flame Graphs. + ``` + stackvis dtrace collapsed < stacks.out | grep SomeFunction > collapsed.out + stackvis collapsed flamegraph-svg < collapsed.out > stacks.svg + ``` -
+* If you've used Brendan's FlameGraph tools, you'll notice the coloring is a little different in the above flamegraph. I ported his tools to Node first so I could incorporate it more easily into other Node programs, but I've also been playing with different coloring options. The current default uses hue to denote stack depth and saturation to indicate time spent. (These are also indicated by position and size.) Other ideas include coloring by module (so V8, JavaScript, libc, etc. show up as different colors.) + +For more on the underlying pieces, see my [previous post on Node.js profiling](http://dtrace.org/blogs/dap/2012/01/05/where-does-your-node-program-spend-its-time/) and [Brendan's post on Flame Graphs](http://dtrace.org/blogs/brendan/2011/12/16/flame-graphs/). + +--- -Dave Pacheco blogs at dtrace.org +Dave Pacheco blogs at [dtrace.org](http://dtrace.org/blogs/dap) diff --git a/locale/en/blog/uncategorized/some-new-node-projects.md b/locale/en/blog/uncategorized/some-new-node-projects.md index bd4c75c9263df..d06fd7d8408b2 100644 --- a/locale/en/blog/uncategorized/some-new-node-projects.md +++ b/locale/en/blog/uncategorized/some-new-node-projects.md @@ -8,9 +8,10 @@ slug: some-new-node-projects layout: blog-post.hbs --- -
-
-  • Superfeedr released a Node XMPP Server. "Since astro had been doing an amazing work with his node-xmpp library to build Client, Components and even Server to server modules, the logical next step was to try to build a Client to Server module so that we could have a full blown server. That’s what we worked on the past couple days, and it’s now on Github!
+* Superfeedr released [a Node XMPP Server](http://blog.superfeedr.com/node-xmpp-server/). "*Since [astro](http://spaceboyz.net/~astro/) had been doing an **amazing work** with his [node-xmpp](https://github.com/astro/node-xmpp) library to build _Client_, _Components_ and even _Server to server_ modules, the logical next step was to try to build a _Client to Server_ module so that we could have a full blown server. That’s what we worked on the past couple days, and [it’s now on Github](https://github.com/superfeedr/node-xmpp)!*"
  • Joyent's Mark Cavage released LDAP.js. "ldapjs is a pure JavaScript, from-scratch framework for implementing LDAP clients and servers in Node.js. It is intended for developers used to interacting with HTTP services in node and express.
+* Joyent's Mark Cavage released [LDAP.js](http://ldapjs.org/). "_ldapjs is a pure JavaScript, from-scratch framework for implementing [LDAP](http://tools.ietf.org/html/rfc4510) clients and servers in [Node.js](https://nodejs.org). It is intended for developers used to interacting with HTTP services in node and [express](http://expressjs.com)._"
  • Microsoft's Tomasz Janczuk released iisnode "The iisnode project provides a native IIS 7.x module that allows hosting of node.js applications in IIS.

  Scott Hanselman posted a detailed walkthrough of how to get started with iisnode
+* Microsoft's Tomasz Janczuk released [iisnode](http://tomasz.janczuk.org/2011/08/hosting-nodejs-applications-in-iis-on.html) "_The [iisnode](https://github.com/tjanczuk/iisnode) project provides a native IIS 7.x module that allows hosting of node.js applications in IIS._"
+
+  Scott Hanselman posted [a detailed walkthrough](http://www.hanselman.com/blog/InstallingAndRunningNodejsApplicationsWithinIISOnWindowsAreYouMad.aspx) of how to get started with iisnode
diff --git a/locale/en/blog/uncategorized/the-videos-from-node-meetup.md b/locale/en/blog/uncategorized/the-videos-from-node-meetup.md
index 29aa28b0b1512..3554baa2714bb 100644
--- a/locale/en/blog/uncategorized/the-videos-from-node-meetup.md
+++ b/locale/en/blog/uncategorized/the-videos-from-node-meetup.md
@@ -10,4 +10,4 @@ layout: blog-post.hbs
 Uber, Voxer, and Joyent described how they use Node in production
-http://www.joyent.com/blog/node-js-meetup-distributed-web-architectures/
+<http://www.joyent.com/blog/node-js-meetup-distributed-web-architectures/>
diff --git a/locale/en/blog/uncategorized/trademark.md b/locale/en/blog/uncategorized/trademark.md
index c664559cda638..a00c4ffe352ae 100644
--- a/locale/en/blog/uncategorized/trademark.md
+++ b/locale/en/blog/uncategorized/trademark.md
@@ -16,5 +16,4 @@ Where does our trademark policy come from? We started by looking at popular open
 While we realize that any changes involving lawyers can be intimidating to the community we want to make this transition as smoothly as possible and welcome your questions and feedback on the policy and how we are implementing it.
-trademark-policy.pdf
-trademark@joyent.com
+[trademark-policy.pdf](/static/documents/trademark-policy.pdf) trademark@joyent.com
diff --git a/locale/en/blog/uncategorized/version-0-6.md b/locale/en/blog/uncategorized/version-0-6.md
index d20be11789dd9..935393ac9b273 100644
--- a/locale/en/blog/uncategorized/version-0-6.md
+++ b/locale/en/blog/uncategorized/version-0-6.md
@@ -8,19 +8,8 @@ slug: version-0-6
 layout: blog-post.hbs
 ---
-Version 0.6.0 will be released next week. Please spend some time this
-week upgrading your code to v0.5.10. Report any API differences at https://github.com/joyent/node/wiki/API-changes-between-v0.4-and-v0.6
-or report a bug to us at http://github.com/joyent/node/issues
-if you hit problems.
+Version 0.6.0 will be released next week. Please spend some time this week upgrading your code to v0.5.10. Report any API differences at <https://github.com/joyent/node/wiki/API-changes-between-v0.4-and-v0.6> or report a bug to us at <http://github.com/joyent/node/issues> if you hit problems.
 
-The API changes between v0.4.12 and v0.5.10 are 99% cosmetic, minor,
-and easy to fix. Most people are able to migrate their code in 10
-minutes. Don't fear.
+The API changes between v0.4.12 and v0.5.10 are 99% cosmetic, minor, and easy to fix. Most people are able to migrate their code in 10 minutes. Don't fear.
 
-Once you've ported your code to v0.5.10 please help out by testing
-third party modules. Make bug reports. Encourage authors to publish
-new versions of their modules. Go through the list of modules at npmjs.com and try out random
-ones. This is especially encouraged of Windows users!
+Once you've ported your code to v0.5.10 please help out by testing third party modules. Make bug reports. Encourage authors to publish new versions of their modules. Go through the list of modules at [npmjs.com](https://npmjs.com/) and try out random ones. This is especially encouraged of Windows users!
diff --git a/locale/en/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md b/locale/en/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md index 1762d70dceae8..3ac28a844424e 100644 --- a/locale/en/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md +++ b/locale/en/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md @@ -7,11 +7,12 @@ category: vulnerability slug: http-server-security-vulnerability-please-upgrade-to-0-6-17 layout: blog-post.hbs --- + ## tl;dr - A carefully crafted attack request can cause the contents of the HTTP parser's buffer to be appended to the attacking request's header, making it appear to come from the attacker. Since it is generally safe to echo back contents of a request, this can allow an attacker to get an otherwise correctly designed server to divulge information about other requests. It is theoretically possible that it could enable header-spoofing attacks, though such an attack has not been demonstrated. - Versions affected: All versions of the 0.5/0.6 branch prior to 0.6.17, and all versions of the 0.7 branch prior to 0.7.8. Versions in the 0.4 branch are not affected. -- Fix: Upgrade to [v0.6.17](http://blog.nodejs.org/2012/05/04/version-0-6-17-stable/, or apply the fix in [c9a231d](https://github.com/joyent/node/commit/c9a231d) to your system. +- Fix: Upgrade to [v0.6.17](http://blog.nodejs.org/2012/05/04/version-0-6-17-stable/), or apply the fix in [c9a231d](https://github.com/joyent/node/commit/c9a231d) to your system. 
## Details

@@ -25,8 +26,8 @@ A few weeks ago, Matthew Daley found a security vulnerability in Node's HTTP
 >
 > The [attached files](https://gist.github.com/2628868) demonstrate the issue:
 >
-> ```
-> $ ./node ~/stringptr-update-poc-server.js &
+> ```bash
+> $ ./node ~/stringptr-update-poc-server.js &
 > [1] 11801
 > $ ~/stringptr-update-poc-client.py
 > HTTP/1.1 200 OK
@@ -45,6 +46,6 @@ The fix landed on [7b3fb22](https://github.com/joyent/node/commit/7b3fb22) and [
 The first releases with the fix are v0.7.8 and 0.6.17. So now is a good time to make a big deal about it.
-If you are using node version 0.6 in production, please upgrade to at least [v0.6.17](http://blog.nodejs.org/2012/05/04/version-0-6-17-stable/), or at least apply the fix in [c9a231d](https://github.com/joyent/node/commit/c9a231d) to your system. (Version 0.6.17 also fixes some other important bugs, and is without doubt the most stable release of Node 0.6 to date, so it's a good idea to upgrade anyway.)
+If you are using node version 0.6 in production, please upgrade to at least [v0.6.17](http://blog.nodejs.org/2012/05/04/version-0-6-17-stable/), or, at the very least, apply the fix in [c9a231d](https://github.com/joyent/node/commit/c9a231d) to your system. (Version 0.6.17 also fixes some other important bugs, and is without doubt the most stable release of Node 0.6 to date, so it's a good idea to upgrade anyway.)
 I'm extremely grateful that Matthew took the time to report the problem to us with such an elegant explanation, and in such a way that we had a reasonable amount of time to fix the issue before making it public.