author    Jarred Sumner <709451+Jarred-Sumner@users.noreply.github.com> 2022-10-12 16:39:26 -0700
committer Jarred Sumner <709451+Jarred-Sumner@users.noreply.github.com> 2022-10-12 16:39:26 -0700
commit    ab4f4bca6da7bb8e4b2266537b332c3b2be5558f (patch)
tree      e098c5a9fd8cc785669a4c7929c091ace84ce736
parent    73a611a3b9a999c9fd77a650b4f6fda71bacaf19 (diff)
Add a doc on Bun.spawn
-rw-r--r--  README.md  184
1 file changed, 184 insertions, 0 deletions
diff --git a/README.md b/README.md
index 748ac0128..c8659ffc5 100644
--- a/README.md
+++ b/README.md
@@ -52,6 +52,8 @@ Bun automatically releases a canary build on every commit to `main`. To upgrade
bun upgrade --canary
```
+<sup>Canary builds are released without automated tests</sup>
+
## Table of Contents
- [Install](#install)
@@ -125,6 +127,7 @@ bun upgrade --canary
- [Usage](#usage-1)
- [Error handling](#error-handling)
- [`Bun.write` – optimizing I/O](#bunwrite--optimizing-io)
+- [`Bun.spawn` – spawn a process](#bunspawn--spawn-a-process)
- [bun:sqlite (SQLite3 module)](#bunsqlite-sqlite3-module)
- [bun:sqlite Benchmark](#bunsqlite-benchmark)
- [Getting started with bun:sqlite](#getting-started-with-bunsqlite)
@@ -1968,6 +1971,187 @@ server.stop();
The interface for `Bun.serve` is based on what [Cloudflare Workers](https://developers.cloudflare.com/workers/learning/migrating-to-module-workers/#module-workers-in-the-dashboard) does.
+## `Bun.spawn` – spawn a process
+
+`Bun.spawn` lets you quickly spawn a process. It is available as of Bun v0.2.0.
+
+```ts
+import { spawn } from "bun";
+
+const { stdout } = spawn(["esbuild"], {
+ stdin: await fetch(
+ "https://raw.githubusercontent.com/oven-sh/bun/main/examples/hashing.js"
+ ),
+});
+
+const text = await new Response(stdout).text();
+console.log(text); // "const input = "hello world".repeat(400); ..."
+```
+
+Synchronous version:
+
+```ts
+import { spawnSync } from "bun";
+
+const { stdout } = spawnSync(["echo", "hi"]);
+
+// When using spawnSync, stdout is a Buffer,
+// which lets you read from it synchronously
+const text = stdout.toString();
+
+console.log(text); // "hi\n"
+```
+
+You can pass an object as the second argument to customize the process:
+
+```ts
+import { spawn } from "bun";
+
+const { stdout } = spawn(["printenv", "FOO"], {
+ cwd: "/tmp",
+
+ env: {
+ ...process.env,
+ FOO: "bar",
+ },
+
+ // Disable stdin
+ stdin: null,
+
+ // Allow us to read from stdout
+ stdout: "pipe",
+
+ // Point stderr to write to "/tmp/stderr.log"
+ stderr: Bun.file("/tmp/stderr.log"),
+});
+
+const text = await new Response(stdout).text();
+console.log(text); // "bar\n"
+```
+
+You can also pass a `Bun.file` for `stdin`:
+
+```ts
+import { spawn } from "bun";
+
+await Bun.write("/tmp/foo.txt", "hi");
+const { stdout } = spawn(["cat"], {
+ stdin: Bun.file("/tmp/foo.txt"),
+ stdout: "pipe",
+});
+
+const text = await new Response(stdout).text();
+console.log(text); // "hi\n"
+```
+
+`stdin` also accepts a TypedArray:
+
+```ts
+import { spawn } from "bun";
+
+const { stdout } = spawn(["cat"], {
+ stdin: new TextEncoder().encode("hi"),
+ stdout: "pipe",
+});
+
+const text = await new Response(stdout).text();
+console.log(text); // "hi\n"
+```
+
+`Bun.spawn` also supports incrementally writing to stdin:
+
+```ts
+import { spawn } from "bun";
+
+const { stdin, stdout } = spawn(["cat"], {
+ stdin: "pipe",
+ stdout: "pipe",
+});
+
+// You can pass it strings or TypedArrays
+// Write "hi" to stdin
+stdin.write("hi");
+
+// By default, stdin is buffered so you need to call flush() to send it
+stdin.flush(true);
+
+// When you're done, call end()
+stdin.end();
+
+const text = await new Response(stdout).text();
+console.log(text); // "hi\n"
+```
+
+Under the hood, `Bun.spawn` and `Bun.spawnSync` use [`posix_spawn(3)`](https://man7.org/linux/man-pages/man3/posix_spawn.3.html).
+
+**stdin**
+
+`stdin` can be one of:
+
+- `Bun.file()`
+- `null` (no stdin)
+- `ArrayBufferView`
+- `Response` or `Request` with a buffered body, such as one returned by `fetch()` (`ReadableStream` is not supported yet, TODO)
+- `number` (file descriptor)
+- `"pipe"` (default), which returns a `FileSink` for fast incremental writing
+- `"inherit"` which will inherit the parent's stdin
+
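+For example, a `Response` with an already-buffered body can be passed directly as `stdin` (a small sketch building on the list above):
+
+```ts
+import { spawn } from "bun";
+
+// A string body is already buffered, so this Response is accepted as stdin
+const { stdout } = spawn(["cat"], {
+  stdin: new Response("hello from a Response"),
+  stdout: "pipe",
+});
+
+const text = await new Response(stdout).text();
+console.log(text); // "hello from a Response"
+```
+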
+**stdout** and **stderr**
+
+`stdout` and `stderr` can be one of:
+
+- `Bun.file()`
+- `null` (disable)
+- `number` (file descriptor)
+- `"pipe"` (default for `stdout`), returns a `ReadableStream`
+- `"inherit"` (default for `stderr`) which will inherit the parent's stdout/stderr
+
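+For example, to capture a subprocess's stderr instead of inheriting it, pass `"pipe"` and read the resulting `ReadableStream` (a sketch, assuming `sh` is available on the host):
+
+```ts
+import { spawn } from "bun";
+
+// stderr defaults to "inherit"; "pipe" lets us read it as a ReadableStream
+const { stderr } = spawn(["sh", "-c", "echo oops 1>&2"], {
+  stderr: "pipe",
+});
+
+const text = await new Response(stderr).text();
+console.log(text); // "oops\n"
+```
+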
+**When to use `Bun.spawn` vs `Bun.spawnSync`**
+
+There are three main differences between `Bun.spawn` and `Bun.spawnSync`.
+
+1. `Bun.spawnSync` blocks the event loop until the subprocess exits, so you should probably avoid it in HTTP servers; for CLI apps, it is usually what you want.
+
+2. `stdout` and `stderr` are different types of objects:
+
+| `spawn` | `spawnSync` |
+| ---------------- | ----------- |
+| `ReadableStream` | `Buffer` |
+
+3. `Bun.spawn` supports incrementally writing to `stdin`.
+
+If you need to read from `stdout` or `stderr` synchronously, you should use `Bun.spawnSync`. Otherwise, `Bun.spawn` is preferred.
+
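+Side by side, the difference in what `stdout` gives you looks like this (a quick sketch):
+
+```ts
+import { spawn, spawnSync } from "bun";
+
+// spawnSync: stdout is a Buffer you can read immediately
+const sync = spawnSync(["echo", "hi"]);
+console.log(sync.stdout.toString()); // "hi\n"
+
+// spawn: stdout is a ReadableStream you read asynchronously
+const proc = spawn(["echo", "hi"], { stdout: "pipe" });
+console.log(await new Response(proc.stdout).text()); // "hi\n"
+```
+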
+**More details**
+
+`Bun.spawn` returns a `Subprocess` object.
+
+More complete types are available in [`bun-types`](https://github.com/oven-sh/bun-types).
+
+```ts
+interface Subprocess {
+ readonly pid: number;
+ readonly stdin: FileSink | undefined;
+ readonly stdout: ReadableStream | number | undefined;
+ readonly stderr: ReadableStream | number | undefined;
+
+ readonly exitCode: number | undefined;
+
+ // Wait for the process to exit
+ readonly exited: Promise<number>;
+
+ // Keep Bun's process alive until the subprocess exits
+ ref(): void;
+
+ // Don't keep Bun's process alive until the subprocess exits
+ unref(): void;
+
+ // Kill the process
+ kill(code?: number): void;
+ readonly killed: boolean;
+}
+```
+
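+For instance, you can wait for a subprocess with `exited`, or stop a long-running one with `kill()` (a small sketch based on the interface above):
+
+```ts
+import { spawn } from "bun";
+
+const proc = spawn(["echo", "done"]);
+
+// `exited` resolves once the subprocess has exited
+await proc.exited;
+console.log(proc.exitCode); // 0
+
+// A long-running process can be stopped with kill()
+const sleeper = spawn(["sleep", "10"]);
+sleeper.kill();
+console.log(sleeper.killed); // true
+```
+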
## `Bun.write` – optimizing I/O
`Bun.write` lets you write, copy or pipe files automatically using the fastest system calls compatible with the input and platform.