Merge branches/quickjs to trunk. This is the way.

git-svn-id: https://www.unprompted.com/svn/projects/tildefriends/trunk@3621 ed5197a5-7fde-0310-b194-c3ffbd925b24
This commit is contained in:
2021-01-02 18:10:00 +00:00
parent d293637741
commit 79022e1e1f
703 changed files with 419987 additions and 30640 deletions

deps/libuv/docs/src/guide/about.rst vendored Normal file

@@ -0,0 +1,22 @@
About
=====
`Nikhil Marathe <https://nikhilism.com>`_ started writing this book one
afternoon (June 16, 2012) when he didn't feel like programming. He had recently
been stung by the lack of good documentation on libuv while working on
`node-taglib <https://github.com/nikhilm/node-taglib>`_. Although reference
documentation was present, there were no comprehensive tutorials. This book is
the output of that need and tries to be accurate. That said, the book may have
mistakes. Pull requests are encouraged.
Nikhil is indebted to Marc Lehmann's comprehensive `man page
<http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod>`_ about libev which
describes much of the semantics of the two libraries.
This book was made using `Sphinx <https://www.sphinx-doc.org>`_ and `vim
<https://www.vim.org>`_.
.. note::
In 2017 the libuv project incorporated Nikhil's work into the official
documentation, where it has been maintained ever since.

deps/libuv/docs/src/guide/basics.rst vendored Normal file

@@ -0,0 +1,219 @@
Basics of libuv
===============
libuv enforces an **asynchronous**, **event-driven** style of programming. Its
core job is to provide an event loop and callback based notifications of I/O
and other activities. libuv offers core utilities like timers, non-blocking
networking support, asynchronous file system access, child processes and more.
Event loops
-----------
In event-driven programming, an application expresses interest in certain events
and responds to them when they occur. The responsibility of gathering events
from the operating system or monitoring other sources of events is handled by
libuv, and the user can register callbacks to be invoked when an event occurs.
The event loop usually keeps running *forever*. In pseudocode:
.. code-block:: python

    while there are still events to process:
        e = get the next event
        if there is a callback associated with e:
            call the callback
Some examples of events are:
* File is ready for writing
* A socket has data ready to be read
* A timer has timed out
This event loop is encapsulated by ``uv_run()`` -- the end-all function when using
libuv.
The most common activity of systems programs is to deal with input and output,
rather than a lot of number-crunching. The problem with using conventional
input/output functions (``read``, ``fprintf``, etc.) is that they are
**blocking**. The actual write to a hard disk or reading from a network, takes
a disproportionately long time compared to the speed of the processor. The
functions don't return until the task is done, and in the meantime your program does
nothing. For programs which require high performance this is a major roadblock
as other activities and other I/O operations are kept waiting.
One of the standard solutions is to use threads. Each blocking I/O operation is
started in a separate thread (or in a thread pool). When the blocking function
gets invoked in the thread, the processor can schedule another thread to run,
which actually needs the CPU.
The approach followed by libuv uses another style, which is the **asynchronous,
non-blocking** style. Most modern operating systems provide event notification
subsystems. For example, a normal ``read`` call on a socket would block until
the sender actually sent something. Instead, the application can request the
operating system to watch the socket and put an event notification in the
queue. The application can inspect the events at its convenience (perhaps doing
some number crunching in the meantime to use the processor to the maximum) and grab the
data. It is **asynchronous** because the application expressed interest at one
point, then used the data at another point (in time and space). It is
**non-blocking** because the application process was free to do other tasks.
This fits in well with libuv's event-loop approach, since the operating system
events can be treated as just another libuv event. The non-blocking nature ensures
that other events can continue to be handled as fast as they come in [#]_.
.. NOTE::
How the I/O is run in the background is not of our concern, but due to the
way our computer hardware works, with the thread as the basic unit of the
processor, libuv and OSes will usually run background/worker threads and/or
polling to perform tasks in a non-blocking manner.
Bert Belder, one of the libuv core developers, has a small video explaining the
architecture of libuv and its background. If you have no prior experience with
either libuv or libev, it is a quick, useful watch.
libuv's event loop is explained in more detail in the `documentation
<http://docs.libuv.org/en/v1.x/design.html#the-i-o-loop>`_.
.. raw:: html
<iframe width="560" height="315"
src="https://www.youtube-nocookie.com/embed/nGn60vDSxQ4" frameborder="0"
allowfullscreen></iframe>
Hello World
-----------
With the basics out of the way, let's write our first libuv program. It does
nothing, except start a loop which will exit immediately.
.. rubric:: helloworld/main.c
.. literalinclude:: ../../code/helloworld/main.c
:linenos:
This program quits immediately because it has no events to process. A libuv
event loop has to be told to watch out for events using the various API
functions.
Starting with libuv v1.0, users should allocate the memory for a loop before
initializing it with ``uv_loop_init(uv_loop_t *)``. This allows you to plug in
custom memory management. Remember to de-initialize the loop using
``uv_loop_close(uv_loop_t *)`` and then delete the storage. The examples never
close loops since the program quits after the loop ends and the system will
reclaim memory. Production grade projects, especially long running systems
programs, should take care to release loop resources correctly.
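A minimal sketch of that lifecycle -- allocate, initialize, run, close, free --
assuming nothing beyond the calls described above:

.. code-block:: c

    #include <stdio.h>
    #include <stdlib.h>
    #include <uv.h>

    int main() {
        /* Allocate the loop ourselves; a custom allocator could be used here. */
        uv_loop_t *loop = malloc(sizeof(uv_loop_t));
        uv_loop_init(loop);

        printf("Now quitting.\n");
        uv_run(loop, UV_RUN_DEFAULT);  /* Returns at once: no active handles. */

        /* De-initialize the loop, then release the storage. */
        uv_loop_close(loop);
        free(loop);
        return 0;
    }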
Default loop
++++++++++++
A default loop is provided by libuv and can be accessed using
``uv_default_loop()``. You should use this loop if you only want a single
loop.
.. note::
node.js uses the default loop as its main loop. If you are writing bindings
you should be aware of this.
.. _libuv-error-handling:
Error handling
--------------
Initialization functions or synchronous functions which may fail return a negative number on error. Async functions that may fail will pass a status parameter to their callbacks. The error messages are defined as ``UV_E*`` `constants`_.
.. _constants: http://docs.libuv.org/en/v1.x/errors.html#error-constants
You can use the ``uv_strerror(int)`` and ``uv_err_name(int)`` functions
to get a ``const char *`` describing the error or the error name respectively.
I/O read callbacks (such as for files and sockets) are passed a parameter ``nread``. If ``nread`` is less than 0, there was an error (``UV_EOF`` is the end-of-file error, which you may want to handle differently).
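For example, a minimal sketch of checking a synchronous call's return value
(``uv_fs_stat`` on a missing path is just an illustrative choice):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    int main() {
        uv_fs_t req;
        /* Synchronous form: NULL callback, the return value is the error code. */
        int r = uv_fs_stat(uv_default_loop(), &req, "/no/such/file", NULL);
        if (r < 0)
            /* Prints something like "ENOENT: no such file or directory". */
            fprintf(stderr, "%s: %s\n", uv_err_name(r), uv_strerror(r));
        uv_fs_req_cleanup(&req);
        return 0;
    }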
Handles and Requests
--------------------
libuv works by the user expressing interest in particular events. This is
usually done by creating a **handle** to an I/O device, timer or process.
Handles are opaque structs named as ``uv_TYPE_t`` where TYPE signifies what the
handle is used for.
.. rubric:: libuv watchers
.. code-block:: c
/* Handle types. */
typedef struct uv_loop_s uv_loop_t;
typedef struct uv_handle_s uv_handle_t;
typedef struct uv_dir_s uv_dir_t;
typedef struct uv_stream_s uv_stream_t;
typedef struct uv_tcp_s uv_tcp_t;
typedef struct uv_udp_s uv_udp_t;
typedef struct uv_pipe_s uv_pipe_t;
typedef struct uv_tty_s uv_tty_t;
typedef struct uv_poll_s uv_poll_t;
typedef struct uv_timer_s uv_timer_t;
typedef struct uv_prepare_s uv_prepare_t;
typedef struct uv_check_s uv_check_t;
typedef struct uv_idle_s uv_idle_t;
typedef struct uv_async_s uv_async_t;
typedef struct uv_process_s uv_process_t;
typedef struct uv_fs_event_s uv_fs_event_t;
typedef struct uv_fs_poll_s uv_fs_poll_t;
typedef struct uv_signal_s uv_signal_t;
/* Request types. */
typedef struct uv_req_s uv_req_t;
typedef struct uv_getaddrinfo_s uv_getaddrinfo_t;
typedef struct uv_getnameinfo_s uv_getnameinfo_t;
typedef struct uv_shutdown_s uv_shutdown_t;
typedef struct uv_write_s uv_write_t;
typedef struct uv_connect_s uv_connect_t;
typedef struct uv_udp_send_s uv_udp_send_t;
typedef struct uv_fs_s uv_fs_t;
typedef struct uv_work_s uv_work_t;
Handles represent long-lived objects. Async operations on such handles are
identified using **requests**. A request is short-lived (usually used across
only one callback) and usually indicates one I/O operation on a handle.
Requests are used to preserve context between the initiation and the callback
of individual actions. For example, a UDP socket is represented by
a ``uv_udp_t``, while individual writes to the socket use a ``uv_udp_send_t``
structure that is passed to the callback after the write is done.
Handles are set up by a corresponding::
uv_TYPE_init(uv_loop_t *, uv_TYPE_t *)
function.
Callbacks are functions which are called by libuv whenever an event the watcher
is interested in has taken place. Application specific logic will usually be
implemented in the callback. For example, an I/O watcher's callback will receive
the data read from a file, a timer callback will be triggered on timeout and so
on.
Idling
++++++
Here is an example of using an idle handle. The callback is called once on
every turn of the event loop. A use case for idle handles is discussed in
:doc:`utilities`. Let us use an idle watcher to look at the watcher life cycle
and see how ``uv_run()`` will now block because a watcher is present. The idle
watcher is stopped when the count is reached and ``uv_run()`` exits since no
event watchers are active.
.. rubric:: idle-basic/main.c
.. literalinclude:: ../../code/idle-basic/main.c
:emphasize-lines: 6,10,14-17
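Since the listing itself is not reproduced in this extract, here is a minimal
sketch of such an idle watcher, assuming the count-to-a-limit structure
described above:

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    static int64_t counter = 0;

    void wait_for_a_while(uv_idle_t* handle) {
        counter++;
        if (counter >= 10e6)
            uv_idle_stop(handle);  /* No active watchers left; uv_run() returns. */
    }

    int main() {
        uv_idle_t idler;
        uv_idle_init(uv_default_loop(), &idler);
        uv_idle_start(&idler, wait_for_a_while);

        printf("Idling...\n");
        uv_run(uv_default_loop(), UV_RUN_DEFAULT);  /* Blocks while the watcher is active. */

        uv_loop_close(uv_default_loop());
        return 0;
    }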
Storing context
+++++++++++++++
In the callback-based style of programming you'll often want to pass some 'context' --
application specific information -- between the call site and the callback. All
handles and requests have a ``void* data`` member which you can set to the
context and cast back in the callback. This is a common pattern used throughout
the C library ecosystem. ``uv_loop_t`` has a similar ``data`` member as well.
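A sketch of the pattern, using a timer handle and a hypothetical ``context_t``
struct (both names are ours, not libuv's):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    typedef struct {
        const char* name;
        int ticks;
    } context_t;

    void on_timer(uv_timer_t* handle) {
        context_t* ctx = (context_t*) handle->data;  /* cast the context back */
        printf("%s fired %d times\n", ctx->name, ++ctx->ticks);
        if (ctx->ticks == 3)
            uv_timer_stop(handle);
    }

    int main() {
        uv_timer_t timer;
        context_t ctx = {"demo", 0};
        uv_timer_init(uv_default_loop(), &timer);
        timer.data = &ctx;  /* stash application context on the handle */
        uv_timer_start(&timer, on_timer, 0, 100);  /* timeout, repeat (ms) */
        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }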
----
.. [#] Depending on the capacity of the hardware of course.

@@ -0,0 +1,48 @@
Advanced event loops
====================
libuv provides considerable user control over event loops, and you can achieve
interesting results by juggling multiple loops. You can also embed libuv's
event loop into another event loop based library -- imagine a Qt based UI, and
Qt's event loop driving a libuv backend which does intensive system level
tasks.
Stopping an event loop
~~~~~~~~~~~~~~~~~~~~~~
``uv_stop()`` can be used to stop an event loop. The earliest the loop will
stop running is *on the next iteration*, possibly later. This means that events
that are ready to be processed in this iteration of the loop will still be
processed, so ``uv_stop()`` can't be used as a kill switch. When ``uv_stop()``
is called, the loop **won't** block for i/o on this iteration. The semantics of
these things can be a bit difficult to understand, so let's look at
``uv_run()`` where all the control flow occurs.
.. rubric:: src/unix/core.c - uv_run
.. literalinclude:: ../../../src/unix/core.c
:linenos:
:lines: 304-324
:emphasize-lines: 10,19,21
``stop_flag`` is set by ``uv_stop()``. Now all libuv callbacks are invoked
within the event loop, which is why invoking ``uv_stop()`` in them will still
lead to this iteration of the loop occurring. First libuv updates the loop time
and runs due timers, then pending, idle and prepare callbacks, and invokes any pending I/O
callbacks. If you were to call ``uv_stop()`` in any of them, ``stop_flag``
would be set. This causes ``uv_backend_timeout()`` to return ``0``, which is
why the loop does not block on I/O. If on the other hand, you called
``uv_stop()`` in one of the check handlers, I/O has already finished and is not
affected.
``uv_stop()`` is useful to shut down a loop when a result has been computed or
there is an error, without having to ensure that all handlers are stopped one
by one.
Here is a simple example that stops the loop and demonstrates how the current
iteration of the loop still takes place.
.. rubric:: uvstop/main.c
.. literalinclude:: ../../code/uvstop/main.c
:linenos:
:emphasize-lines: 11
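If the listing is unavailable, a hedged reconstruction of the idea -- an idle
watcher that calls ``uv_stop()`` after a few turns -- looks like this:

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    static int64_t counter = 0;

    void idle_cb(uv_idle_t* handle) {
        printf("Idle callback\n");
        if (++counter >= 5) {
            uv_stop(handle->loop);  /* the current iteration still completes */
            printf("uv_stop() called\n");
        }
    }

    int main() {
        uv_idle_t idler;
        uv_idle_init(uv_default_loop(), &idler);
        uv_idle_start(&idler, idle_cb);
        uv_run(uv_default_loop(), UV_RUN_DEFAULT);  /* returns despite the active handle */
        return 0;
    }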

deps/libuv/docs/src/guide/filesystem.rst vendored Normal file

@@ -0,0 +1,330 @@
Filesystem
==========
Simple filesystem read/write is achieved using the ``uv_fs_*`` functions and the
``uv_fs_t`` struct.
.. note::
The libuv filesystem operations are different from :doc:`socket operations
<networking>`. Socket operations use the non-blocking operations provided
by the operating system. Filesystem operations use blocking functions
internally, but invoke these functions in a `thread pool`_ and notify
watchers registered with the event loop when application interaction is
required.
.. _thread pool: http://docs.libuv.org/en/v1.x/threadpool.html#thread-pool-work-scheduling
All filesystem functions have two forms -- *synchronous* and *asynchronous*.
The *synchronous* forms automatically get called (and **block**) if the
callback is null. The return value of functions is a :ref:`libuv error code
<libuv-error-handling>`. This is usually only useful for synchronous calls.
The *asynchronous* form is called when a callback is passed and the return
value is 0.
Reading/Writing files
---------------------
A file descriptor is obtained using
.. code-block:: c
int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb)
``flags`` and ``mode`` are standard
`Unix flags <https://man7.org/linux/man-pages/man2/open.2.html>`_.
libuv takes care of converting to the appropriate Windows flags.
File descriptors are closed using
.. code-block:: c
int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb)
Filesystem operation callbacks have the signature:
.. code-block:: c
void callback(uv_fs_t* req);
Let's see a simple implementation of ``cat``. We start with registering
a callback for when the file is opened:
.. rubric:: uvcat/main.c - opening a file
.. literalinclude:: ../../code/uvcat/main.c
:linenos:
:lines: 41-53
:emphasize-lines: 4, 6-7
The ``result`` field of a ``uv_fs_t`` is the file descriptor in case of the
``uv_fs_open`` callback. If the file is successfully opened, we start reading it.
.. rubric:: uvcat/main.c - read callback
.. literalinclude:: ../../code/uvcat/main.c
:linenos:
:lines: 26-40
:emphasize-lines: 2,8,12
In the case of a read call, you should pass an *initialized* buffer which will
be filled with data before the read callback is triggered. The ``uv_fs_*``
operations map almost directly to certain POSIX functions, so EOF is indicated
in this case by ``result`` being 0. In the case of streams or pipes, the
``UV_EOF`` constant would have been passed as a status instead.
Here you see a common pattern when writing asynchronous programs. The
``uv_fs_close()`` call is performed synchronously. *Usually tasks which are
one-off, or are done as part of the startup or shutdown stage are performed
synchronously, since we are interested in fast I/O when the program is going
about its primary task and dealing with multiple I/O sources*. For solo tasks
the performance difference is usually negligible, and synchronous code may be simpler.
Filesystem writing is similarly simple using ``uv_fs_write()``. *Your callback
will be triggered after the write is complete*. In our case the callback
simply drives the next read. Thus read and write proceed in lockstep via
callbacks.
.. rubric:: uvcat/main.c - write callback
.. literalinclude:: ../../code/uvcat/main.c
:linenos:
:lines: 16-24
:emphasize-lines: 6
.. warning::
Due to the way filesystems and disk drives are configured for performance,
a write that 'succeeds' may not be committed to disk yet.
We set the dominoes rolling in ``main()``:
.. rubric:: uvcat/main.c
.. literalinclude:: ../../code/uvcat/main.c
:linenos:
:lines: 55-
:emphasize-lines: 2
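The ``uvcat`` listings are not reproduced in this extract, so here is a
condensed, hedged sketch of the whole flow described above -- open, then read
and write in lockstep, with a synchronous close at EOF:

.. code-block:: c

    #include <stdio.h>
    #include <fcntl.h>
    #include <uv.h>

    static uv_fs_t open_req, read_req, write_req;
    static char buffer[1024];
    static uv_buf_t iov;

    void on_read(uv_fs_t* req);

    void on_write(uv_fs_t* req) {
        if (req->result < 0)
            fprintf(stderr, "Write error: %s\n", uv_strerror((int) req->result));
        else  /* drive the next read; read and write proceed in lockstep */
            uv_fs_read(uv_default_loop(), &read_req, (uv_file) open_req.result,
                       &iov, 1, -1, on_read);
    }

    void on_read(uv_fs_t* req) {
        if (req->result < 0) {
            fprintf(stderr, "Read error: %s\n", uv_strerror((int) req->result));
        } else if (req->result == 0) {
            /* EOF for uv_fs_* reads; close synchronously (NULL callback). */
            uv_fs_t close_req;
            uv_fs_close(uv_default_loop(), &close_req, (uv_file) open_req.result, NULL);
        } else {
            iov.len = (size_t) req->result;
            uv_fs_write(uv_default_loop(), &write_req, 1 /* stdout */,
                        &iov, 1, -1, on_write);
        }
    }

    void on_open(uv_fs_t* req) {
        if (req->result >= 0) {  /* result holds the file descriptor */
            iov = uv_buf_init(buffer, sizeof(buffer));
            uv_fs_read(uv_default_loop(), &read_req, (uv_file) req->result,
                       &iov, 1, -1, on_read);
        } else {
            fprintf(stderr, "error opening file: %s\n", uv_strerror((int) req->result));
        }
    }

    int main(int argc, char** argv) {
        uv_fs_open(uv_default_loop(), &open_req, argv[1], O_RDONLY, 0, on_open);
        uv_run(uv_default_loop(), UV_RUN_DEFAULT);

        /* Always release the requests' internal memory. */
        uv_fs_req_cleanup(&open_req);
        uv_fs_req_cleanup(&read_req);
        uv_fs_req_cleanup(&write_req);
        return 0;
    }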
.. warning::
The ``uv_fs_req_cleanup()`` function must always be called on filesystem
requests to free internal memory allocations in libuv.
Filesystem operations
---------------------
All the standard filesystem operations like ``unlink``, ``rmdir``, and ``stat`` are
supported asynchronously and have intuitive argument order. They follow the
same patterns as the read/write/open calls, returning the result in the
``uv_fs_t.result`` field. The full list:
.. rubric:: Filesystem operations
.. code-block:: c
int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb);
int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb);
int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb);
int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb);
int uv_fs_copyfile(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb);
int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb);
int uv_fs_mkdtemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb);
int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb);
int uv_fs_scandir_next(uv_fs_t* req, uv_dirent_t* ent);
int uv_fs_opendir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb);
int uv_fs_closedir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb);
int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb);
int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb);
int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb);
int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb);
int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file file, int64_t offset, uv_fs_cb cb);
int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file out_fd, uv_file in_fd, int64_t in_offset, size_t length, uv_fs_cb cb);
int uv_fs_access(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb);
int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb);
int uv_fs_utime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb);
int uv_fs_futime(uv_loop_t* loop, uv_fs_t* req, uv_file file, double atime, double mtime, uv_fs_cb cb);
int uv_fs_lstat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb);
int uv_fs_symlink(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb);
int uv_fs_readlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_realpath(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb);
int uv_fs_fchmod(uv_loop_t* loop, uv_fs_t* req, uv_file file, int mode, uv_fs_cb cb);
int uv_fs_chown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb);
int uv_fs_fchown(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb);
int uv_fs_lchown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb);
.. _buffers-and-streams:
Buffers and Streams
-------------------
The basic I/O handle in libuv is the stream (``uv_stream_t``). TCP sockets, UDP
sockets, and pipes for file I/O and IPC are all treated as stream subclasses.
Streams are initialized using custom functions for each subclass, then operated
upon using
.. code-block:: c
int uv_read_start(uv_stream_t*, uv_alloc_cb alloc_cb, uv_read_cb read_cb);
int uv_read_stop(uv_stream_t*);
int uv_write(uv_write_t* req, uv_stream_t* handle,
const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb);
The stream-based functions are simpler to use than the filesystem ones and
libuv will automatically keep reading from a stream when ``uv_read_start()`` is
called once, until ``uv_read_stop()`` is called.
The discrete unit of data is the buffer -- ``uv_buf_t``. This is simply
a collection of a pointer to bytes (``uv_buf_t.base``) and the length
(``uv_buf_t.len``). The ``uv_buf_t`` is lightweight and passed around by value.
What does require management is the actual bytes, which have to be allocated
and freed by the application.
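A small illustration of that split between the struct and the bytes it points
to:

.. code-block:: c

    #include <stdlib.h>
    #include <uv.h>

    int main() {
        /* The bytes are allocated and freed by the application... */
        char* data = malloc(1024);
        /* ...while the uv_buf_t wrapping them is a cheap value type. */
        uv_buf_t buf = uv_buf_init(data, 1024);  /* buf.base = data, buf.len = 1024 */

        /* `buf` would be passed by value to uv_read_start()/uv_write();
         * free the backing memory only once libuv is done with it. */
        free(data);
        return 0;
    }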
.. ERROR::

    This program does not always work; it needs something better.
To demonstrate streams we will need to use ``uv_pipe_t``. This allows streaming
local files [#]_. Here is a simple tee utility using libuv. Doing all operations
asynchronously shows the power of evented I/O. The two writes won't block each
other, but we have to be careful to copy over the buffer data to ensure we don't
free a buffer until it has been written.
The program is to be executed as::
./uvtee <output_file>
We start off opening pipes on the files we require. libuv pipes to a file are
opened as bidirectional by default.
.. rubric:: uvtee/main.c - read on pipes
.. literalinclude:: ../../code/uvtee/main.c
:linenos:
:lines: 61-80
:emphasize-lines: 4,5,15
The third argument of ``uv_pipe_init()`` should be set to 1 for IPC using named
pipes. This is covered in :doc:`processes`. The ``uv_pipe_open()`` call
associates the pipe with the file descriptor, in this case ``0`` (standard
input).
We start monitoring ``stdin``. The ``alloc_buffer`` callback is invoked as new
buffers are required to hold incoming data. ``read_stdin`` will be called with
these buffers.
.. rubric:: uvtee/main.c - reading buffers
.. literalinclude:: ../../code/uvtee/main.c
:linenos:
:lines: 19-22,44-60
The standard ``malloc`` is sufficient here, but you can use any memory allocation
scheme. For example, node.js uses its own slab allocator which associates
buffers with V8 objects.
The read callback ``nread`` parameter is less than 0 on any error. This error
might be EOF, in which case we close all the streams, using the generic close
function ``uv_close()`` which deals with the handle based on its internal type.
Otherwise ``nread`` is a non-negative number and we can attempt to write that
many bytes to the output streams. Finally, remember that buffer allocation and
deallocation are the application's responsibility, so we free the data.
The allocation callback may return a buffer with length zero if it fails to
allocate memory. In this case, the read callback is invoked with error
UV_ENOBUFS. libuv will continue to attempt to read the stream though, so you
must explicitly call ``uv_close()`` if you want to stop when allocation fails.
The read callback may be called with ``nread = 0``, indicating that at this
point there is nothing to be read. Most applications will just ignore this.
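A hedged sketch of the allocation and read callbacks just described, assuming
the ``stdin_pipe``/``stdout_pipe``/``file_pipe`` handles and the ``write_data``
helper that the next listing wraps:

.. code-block:: c

    void alloc_buffer(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
        /* A zero-length buffer here would make libuv report UV_ENOBUFS. */
        buf->base = (char*) malloc(suggested_size);
        buf->len = suggested_size;
    }

    void read_stdin(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) {
        if (nread < 0) {
            if (nread == UV_EOF) {
                /* End of input: close all handles via the generic close function. */
                uv_close((uv_handle_t*) &stdin_pipe, NULL);
                uv_close((uv_handle_t*) &stdout_pipe, NULL);
                uv_close((uv_handle_t*) &file_pipe, NULL);
            }
        } else if (nread > 0) {
            write_data((uv_stream_t*) &stdout_pipe, nread, *buf, on_stdout_write);
            write_data((uv_stream_t*) &file_pipe, nread, *buf, on_file_write);
        }
        /* Buffer deallocation is the application's responsibility. */
        if (buf->base)
            free(buf->base);
    }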
.. rubric:: uvtee/main.c - Write to pipe
.. literalinclude:: ../../code/uvtee/main.c
:linenos:
:lines: 9-13,23-42
``write_data()`` makes a copy of the buffer obtained from read. This buffer
does not get passed through to the write callback triggered on write completion. To
get around this we wrap a write request and a buffer in ``write_req_t`` and
unwrap it in the callbacks. We make a copy so we can free the two buffers from
the two calls to ``write_data`` independently of each other. While acceptable
for a demo program like this, you'll probably want smarter memory management,
like reference counted buffers or a pool of buffers in any major application.
.. WARNING::
If your program is meant to be used with other programs it may knowingly or
unknowingly be writing to a pipe. This makes it susceptible to `aborting on
receiving a SIGPIPE`_. It is a good idea to insert::
signal(SIGPIPE, SIG_IGN)
in the initialization stages of your application.
.. _aborting on receiving a SIGPIPE: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#The_special_problem_of_SIGPIPE
File change events
------------------
All modern operating systems provide APIs to put watches on individual files or
directories and be informed when the files are modified. libuv wraps common
file change notification libraries [#fsnotify]_. This is one of the more
inconsistent parts of libuv. File change notification systems are themselves
extremely varied across platforms so getting everything working everywhere is
difficult. To demonstrate, I'm going to build a simple utility which runs
a command whenever any of the watched files change::
./onchange <command> <file1> [file2] ...
The file change notification is started using ``uv_fs_event_init()``:
.. rubric:: onchange/main.c - The setup
.. literalinclude:: ../../code/onchange/main.c
:linenos:
:lines: 26-
:emphasize-lines: 15
The third argument is the actual file or directory to monitor. The last
argument, ``flags``, can be:
.. code-block:: c
/*
* Flags to be passed to uv_fs_event_start().
*/
enum uv_fs_event_flags {
UV_FS_EVENT_WATCH_ENTRY = 1,
UV_FS_EVENT_STAT = 2,
UV_FS_EVENT_RECURSIVE = 4
};
``UV_FS_EVENT_WATCH_ENTRY`` and ``UV_FS_EVENT_STAT`` don't do anything (yet).
``UV_FS_EVENT_RECURSIVE`` will start watching subdirectories as well on
supported platforms.
The callback will receive the following arguments:
#. ``uv_fs_event_t *handle`` - The handle. The ``path`` field of the handle
is the file on which the watch was set.
#. ``const char *filename`` - If a directory is being monitored, this is the
file which was changed. Only non-``null`` on Linux and Windows. May be ``null``
even on those platforms.
#. ``int flags`` - one of ``UV_RENAME`` or ``UV_CHANGE``, or a bitwise OR of
both.
#. ``int status`` - Currently 0.
In our example we simply print the arguments and run the command using
``system()``.
.. rubric:: onchange/main.c - file change notification callback
.. literalinclude:: ../../code/onchange/main.c
:linenos:
:lines: 9-24
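A hedged sketch of such a callback, assuming a global ``command`` string set
from ``argv`` in ``main()``:

.. code-block:: c

    const char* command;  /* set from argv[1] in main() */

    void run_command(uv_fs_event_t* handle, const char* filename,
                     int events, int status) {
        char path[1024];
        size_t size = sizeof(path) - 1;
        /* uv_fs_event_getpath() recovers the path the watch was set on. */
        uv_fs_event_getpath(handle, path, &size);
        path[size] = '\0';

        fprintf(stderr, "Change detected in %s: ", path);
        if (events & UV_RENAME) fprintf(stderr, "renamed");
        if (events & UV_CHANGE) fprintf(stderr, "changed");
        fprintf(stderr, " %s\n", filename ? filename : "");

        system(command);
    }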
----
.. [#fsnotify] inotify on Linux, FSEvents on Darwin, kqueue on BSDs,
ReadDirectoryChangesW on Windows, event ports on Solaris, unsupported on Cygwin
.. [#] see :ref:`pipes`

@@ -0,0 +1,75 @@
Introduction
============
This 'book' is a small set of tutorials about using libuv_ as
a high performance evented I/O library which offers the same API on Windows and Unix.
It is meant to cover the main areas of libuv, but is not a comprehensive
reference discussing every function and data structure. The `official libuv
documentation`_ may be consulted for full details.
.. _official libuv documentation: http://docs.libuv.org/en/v1.x/
This book is still a work in progress, so sections may be incomplete, but
I hope you will enjoy it as it grows.
Who this book is for
--------------------
If you are reading this book, you are either:
1) a systems programmer, creating low-level programs such as daemons or network
services and clients. You have found that the event loop approach is well
suited for your application and decided to use libuv.
2) a node.js module writer, who wants to wrap platform APIs
written in C or C++ with a set of (a)synchronous APIs that are exposed to
JavaScript. You will use libuv purely in the context of node.js. For
this you will require some other resources as the book does not cover parts
specific to v8/node.js.
This book assumes that you are comfortable with the C programming language.
Background
----------
The node.js_ project began in 2009 as a JavaScript environment decoupled
from the browser. Using Google's V8_ and Marc Lehmann's libev_, node.js
combined a model of I/O -- evented -- with a language that was well suited to
the style of programming, due to the way it had been shaped by browsers. As
node.js grew in popularity, it was important to make it work on Windows, but
libev ran only on Unix. The Windows equivalent of kernel event notification
mechanisms like kqueue or (e)poll is IOCP. libuv was an abstraction around libev
or IOCP depending on the platform, providing users an API based on libev.
In the node-v0.9.0 version of libuv `libev was removed`_.
Since then libuv has continued to mature and become a high quality standalone
library for system programming. Users outside of node.js include Mozilla's
Rust_ programming language, and a variety_ of language bindings.
This book and the code are based on libuv version `v1.3.0`_.
Code
----
All the code from this book is included as part of the source of the book on
GitHub. `Clone`_/`Download`_ the book, then build libuv::
cd libuv
./autogen.sh
./configure
make
There is no need to ``make install``. To build the examples run ``make`` in the
``code/`` directory.
.. _Clone: https://github.com/nikhilm/uvbook
.. _Download: https://github.com/nikhilm/uvbook/downloads
.. _v1.3.0: https://github.com/libuv/libuv/tags
.. _V8: https://v8.dev
.. _libev: http://software.schmorp.de/pkg/libev.html
.. _libuv: https://github.com/libuv/libuv
.. _node.js: https://www.nodejs.org
.. _libev was removed: https://github.com/joyent/libuv/issues/485
.. _Rust: https://www.rust-lang.org
.. _variety: https://github.com/libuv/libuv/wiki/Projects-that-use-libuv

deps/libuv/docs/src/guide/networking.rst vendored Normal file

@@ -0,0 +1,250 @@
Networking
==========
Networking in libuv is not much different from directly using the BSD socket
interface: some things are easier, all are non-blocking, but the concepts stay
the same. In addition libuv offers utility functions to abstract the annoying,
repetitive and low-level tasks like setting up sockets using the BSD socket
structures, DNS lookup, and tweaking various socket parameters.
The ``uv_tcp_t`` and ``uv_udp_t`` structures are used for network I/O.
.. NOTE::
The code samples in this chapter exist to show certain libuv APIs. They are
not examples of good quality code. They leak memory and don't always close
connections properly.
TCP
---
TCP is a connection-oriented stream protocol and is therefore based on the
libuv streams infrastructure.
Server
++++++
Server sockets proceed by:
1. ``uv_tcp_init`` the TCP handle.
2. ``uv_tcp_bind`` it.
3. Call ``uv_listen`` on the handle to have a callback invoked whenever a new
connection is established by a client.
4. Use ``uv_accept`` to accept the connection.
5. Use :ref:`stream operations <buffers-and-streams>` to communicate with the
client.
Here is a simple echo server:
.. rubric:: tcp-echo-server/main.c - The listen socket
.. literalinclude:: ../../code/tcp-echo-server/main.c
:linenos:
:lines: 68-
:emphasize-lines: 4-5,7-10
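The listing is not reproduced in this extract; a hedged sketch of the
listen-socket setup it describes (the port and backlog values are arbitrary,
and ``on_new_connection`` is sketched a little further below):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    #define DEFAULT_PORT 7000
    #define DEFAULT_BACKLOG 128

    uv_loop_t* loop;

    void on_new_connection(uv_stream_t* server, int status);

    int main() {
        loop = uv_default_loop();

        uv_tcp_t server;
        uv_tcp_init(loop, &server);

        struct sockaddr_in addr;
        uv_ip4_addr("0.0.0.0", DEFAULT_PORT, &addr);
        uv_tcp_bind(&server, (const struct sockaddr*) &addr, 0);

        int r = uv_listen((uv_stream_t*) &server, DEFAULT_BACKLOG, on_new_connection);
        if (r) {
            fprintf(stderr, "Listen error %s\n", uv_strerror(r));
            return 1;
        }
        return uv_run(loop, UV_RUN_DEFAULT);
    }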
You can see the utility function ``uv_ip4_addr`` being used to convert from
a human-readable IP address and port pair to the ``sockaddr_in`` structure required by
the BSD socket APIs. The reverse can be obtained using ``uv_ip4_name``.
.. NOTE::
There are ``uv_ip6_*`` analogues for the ip4 functions.
Most of the setup functions are synchronous since they are CPU-bound.
``uv_listen`` is where we return to libuv's callback style. The second
argument is the backlog -- the maximum length of the queue of pending connections.
When a connection is initiated by clients, the callback is required to set up
a handle for the client socket and associate the handle using ``uv_accept``.
In this case we also establish interest in reading from this stream.
.. rubric:: tcp-echo-server/main.c - Accepting the client
.. literalinclude:: ../../code/tcp-echo-server/main.c
:linenos:
:lines: 51-66
:emphasize-lines: 9-10
The remaining set of functions is very similar to the streams example and can
be found in the code. Just remember to call ``uv_close`` when the socket isn't
required. This can be done even in the ``uv_listen`` callback if you are not
interested in accepting the connection.
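A matching hedged sketch of the connection callback, assuming ``alloc_buffer``
and ``echo_read`` callbacks along the lines of the :ref:`stream operations
<buffers-and-streams>` section:

.. code-block:: c

    void on_new_connection(uv_stream_t* server, int status) {
        if (status < 0) {
            fprintf(stderr, "New connection error %s\n", uv_strerror(status));
            return;
        }

        uv_tcp_t* client = (uv_tcp_t*) malloc(sizeof(uv_tcp_t));
        uv_tcp_init(loop, client);
        if (uv_accept(server, (uv_stream_t*) client) == 0) {
            /* Register interest in reading from this client. */
            uv_read_start((uv_stream_t*) client, alloc_buffer, echo_read);
        } else {
            /* Not interested after all: close immediately. */
            uv_close((uv_handle_t*) client, NULL);
        }
    }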
Client
++++++
Where you do bind/listen/accept on the server, on the client side it's simply
a matter of calling ``uv_tcp_connect``. The same ``uv_connect_cb`` style
callback of ``uv_listen`` is used by ``uv_tcp_connect``. Try::
uv_tcp_t* socket = (uv_tcp_t*)malloc(sizeof(uv_tcp_t));
uv_tcp_init(loop, socket);
uv_connect_t* connect = (uv_connect_t*)malloc(sizeof(uv_connect_t));
struct sockaddr_in dest;
uv_ip4_addr("127.0.0.1", 80, &dest);
uv_tcp_connect(connect, socket, (const struct sockaddr*)&dest, on_connect);
where ``on_connect`` will be called after the connection is established. The
callback receives the ``uv_connect_t`` struct, which has a member ``.handle``
pointing to the socket.
UDP
---
The `User Datagram Protocol`_ offers connectionless, unreliable network
communication. Hence libuv doesn't offer a stream. Instead libuv provides
non-blocking UDP support via the `uv_udp_t` handle (for receiving) and
`uv_udp_send_t` request (for sending) and related functions. That said, the
actual API for reading/writing is very similar to normal stream reads. To look
at how UDP can be used, the example shows the first stage of obtaining an IP
address from a `DHCP`_ server -- DHCP Discover.
.. note::
You will have to run `udp-dhcp` as **root** since it uses well-known port
numbers below 1024.
.. rubric:: udp-dhcp/main.c - Setup and send UDP packets
.. literalinclude:: ../../code/udp-dhcp/main.c
:linenos:
:lines: 7-11,104-
:emphasize-lines: 8,10-11,17-18,21
.. note::
The IP address ``0.0.0.0`` is used to bind to all interfaces. The IP
address ``255.255.255.255`` is a broadcast address meaning that packets
will be sent to all interfaces on the subnet. Port ``0`` means that the OS
randomly assigns a port.
First we set up the receiving socket to bind on all interfaces on port 68 (DHCP
client) and start a read on it. This will read back responses from any DHCP
server that replies. We use the ``UV_UDP_REUSEADDR`` flag to play nice with any
other system DHCP clients that are running on this computer on the same port.
Then we set up a similar send socket and use ``uv_udp_send`` to send
a *broadcast message* on port 67 (DHCP server).
It is **necessary** to set the broadcast flag, otherwise you will get an
``EACCES`` error [#]_. The exact message being sent is not relevant to this
book and you can study the code if you are interested. As usual the read and
write callbacks will receive a status code of < 0 if something went wrong.
Since UDP sockets are not connected to a particular peer, the read callback
receives an extra parameter about the sender of the packet.
``nread`` may be zero if there is no more data to be read. If ``addr`` is NULL,
it indicates there is nothing to read (the callback shouldn't do anything), if
not NULL, it indicates that an empty datagram was received from the host at
``addr``. The ``flags`` parameter may be ``UV_UDP_PARTIAL`` if the buffer
provided by your allocator was not large enough to hold the data. *In this case
the OS will discard the data that could not fit* (That's UDP for you!).
.. rubric:: udp-dhcp/main.c - Reading packets
.. literalinclude:: ../../code/udp-dhcp/main.c
:linenos:
:lines: 17-40
:emphasize-lines: 1,23
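A hedged sketch of a receive path like the one above (the DHCP parsing itself
is omitted; ``alloc_buffer`` is as before):

.. code-block:: c

    void on_read(uv_udp_t* req, ssize_t nread, const uv_buf_t* buf,
                 const struct sockaddr* addr, unsigned flags) {
        if (nread < 0) {
            fprintf(stderr, "Read error %s\n", uv_err_name((int) nread));
            uv_close((uv_handle_t*) req, NULL);
        } else if (addr != NULL) {
            char sender[17] = {0};
            uv_ip4_name((const struct sockaddr_in*) addr, sender, 16);
            fprintf(stderr, "Recv from %s (%zd bytes)\n", sender, nread);
            uv_udp_recv_stop(req);  /* one response is enough for the demo */
        }
        free(buf->base);
    }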
UDP Options
+++++++++++
Time-to-live
~~~~~~~~~~~~
The TTL of packets sent on the socket can be changed using ``uv_udp_set_ttl``.
IPv6 stack only
~~~~~~~~~~~~~~~
IPv6 sockets can be used for both IPv4 and IPv6 communication. If you want to
restrict the socket to IPv6 only, pass the ``UV_UDP_IPV6ONLY`` flag to
``uv_udp_bind`` [#]_.
Multicast
~~~~~~~~~
A socket can (un)subscribe to a multicast group using:
.. code-block:: c

    int uv_udp_set_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, uv_membership membership);
where ``membership`` is ``UV_JOIN_GROUP`` or ``UV_LEAVE_GROUP``.
The concepts of multicasting are nicely explained in `this guide`_.
.. _this guide: https://www.tldp.org/HOWTO/Multicast-HOWTO-2.html
Local loopback of multicast packets is enabled by default [#]_; use
``uv_udp_set_multicast_loop`` to switch it off.
The packet time-to-live for multicast packets can be changed using
``uv_udp_set_multicast_ttl``.
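For instance, a one-line (hedged) example of joining an IPv4 multicast group,
assuming ``recv_socket`` is an initialized, bound ``uv_udp_t``:

.. code-block:: c

    /* A NULL interface_addr lets the OS pick the default interface. */
    uv_udp_set_membership(&recv_socket, "239.255.0.1", NULL, UV_JOIN_GROUP);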
Querying DNS
------------
libuv provides asynchronous DNS resolution. For this it provides its own
``getaddrinfo`` replacement [#]_. In the callback you can
perform normal socket operations on the retrieved addresses. Let's connect to
Freenode to see an example of DNS resolution.
.. rubric:: dns/main.c
.. literalinclude:: ../../code/dns/main.c
:linenos:
:lines: 61-
:emphasize-lines: 12
If ``uv_getaddrinfo`` returns non-zero, something went wrong in the setup and
your callback won't be invoked at all. All arguments can be freed immediately
after ``uv_getaddrinfo`` returns. The `hostname`, `servname` and `hints`
structures are documented in `the getaddrinfo man page <getaddrinfo_>`_. The
callback can be ``NULL`` in which case the function will run synchronously.
In the resolver callback, you can pick any IP from the linked list of ``struct
addrinfo(s)``. This also demonstrates ``uv_tcp_connect``. It is necessary to
call ``uv_freeaddrinfo`` in the callback.
.. rubric:: dns/main.c
.. literalinclude:: ../../code/dns/main.c
:linenos:
:lines: 42-60
:emphasize-lines: 8,16
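A hedged sketch of both halves of the resolution, close to the listing
described above:

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <uv.h>

    uv_loop_t* loop;

    void on_resolved(uv_getaddrinfo_t* resolver, int status, struct addrinfo* res) {
        if (status < 0) {
            fprintf(stderr, "getaddrinfo callback error %s\n", uv_err_name(status));
            return;
        }
        char addr[17] = {0};
        uv_ip4_name((struct sockaddr_in*) res->ai_addr, addr, 16);
        fprintf(stderr, "%s\n", addr);
        /* ...a uv_tcp_connect() to `addr` would go here... */
        uv_freeaddrinfo(res);  /* required: release the addrinfo list */
    }

    int main() {
        loop = uv_default_loop();

        struct addrinfo hints;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = PF_INET;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_protocol = IPPROTO_TCP;

        uv_getaddrinfo_t resolver;
        fprintf(stderr, "irc.freenode.net is... ");
        int r = uv_getaddrinfo(loop, &resolver, on_resolved,
                               "irc.freenode.net", "6667", &hints);
        if (r) {
            fprintf(stderr, "getaddrinfo call error %s\n", uv_err_name(r));
            return 1;
        }
        return uv_run(loop, UV_RUN_DEFAULT);
    }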
libuv also provides the inverse `uv_getnameinfo`_.
.. _uv_getnameinfo: http://docs.libuv.org/en/v1.x/dns.html#c.uv_getnameinfo
Network interfaces
------------------
Information about the system's network interfaces can be obtained through libuv
using ``uv_interface_addresses``. This simple program just prints out all the
interface details so you get an idea of the fields that are available. This is
useful to allow your service to bind to IP addresses when it starts.
.. rubric:: interfaces/main.c
.. literalinclude:: ../../code/interfaces/main.c
:linenos:
:emphasize-lines: 9,17
``is_internal`` is true for loopback interfaces. Note that if a physical
interface has multiple IPv4/IPv6 addresses, the name will be reported multiple
times, with each address being reported once.
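A hedged sketch of iterating the interface list (printing IPv4 addresses only,
for brevity):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    int main() {
        uv_interface_address_t* info;
        int count;
        uv_interface_addresses(&info, &count);

        printf("Number of interfaces: %d\n", count);
        for (int i = 0; i < count; i++) {
            printf("Name: %s\n", info[i].name);
            printf("Internal? %s\n", info[i].is_internal ? "Yes" : "No");

            if (info[i].address.address4.sin_family == AF_INET) {
                char buf[64];
                uv_ip4_name(&info[i].address.address4, buf, sizeof(buf));
                printf("IPv4 address: %s\n", buf);
            }
        }

        uv_free_interface_addresses(info, count);
        return 0;
    }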
.. _c-ares: https://c-ares.haxx.se
.. _getaddrinfo: https://man7.org/linux/man-pages/man3/getaddrinfo.3.html
.. _User Datagram Protocol: https://en.wikipedia.org/wiki/User_Datagram_Protocol
.. _DHCP: https://tools.ietf.org/html/rfc2131
----
.. [#] https://beej.us/guide/bgnet/html/#broadcast-packetshello-world
.. [#] on Windows only supported on Windows Vista and later.
.. [#] https://www.tldp.org/HOWTO/Multicast-HOWTO-6.html#ss6.1
.. [#] libuv uses the system ``getaddrinfo`` in the libuv threadpool. libuv
v0.8.0 and earlier also included c-ares_ as an alternative, but this has been
removed in v0.9.0.

deps/libuv/docs/src/guide/processes.rst vendored Normal file

@@ -0,0 +1,406 @@
Processes
=========
libuv offers considerable child process management, abstracting the platform
differences and allowing communication with the child process using streams or
named pipes.
A common idiom in Unix is for every process to do one thing and do it well. In
such a case, a process often uses multiple child processes to achieve tasks
(similar to using pipes in shells). A multi-process model with messages
may also be easier to reason about compared to one with threads and shared
memory.
A common refrain against event-based programs is that they cannot take
advantage of multiple cores in modern computers. In a multi-threaded program
the kernel can perform scheduling and assign different threads to different
cores, improving performance. But an event loop has only one thread. The
workaround can be to launch multiple processes instead, with each process
running an event loop, and each process getting assigned to a separate CPU
core.
Spawning child processes
------------------------
The simplest case is when you simply want to launch a process and know when it
exits. This is achieved using ``uv_spawn``.
.. rubric:: spawn/main.c
.. literalinclude:: ../../code/spawn/main.c
:linenos:
:lines: 6-8,15-
:emphasize-lines: 11,13-17
.. NOTE::
``options`` is implicitly initialized with zeros since it is a global
variable. If you change ``options`` to a local variable, remember to
initialize it to null out all unused fields::
uv_process_options_t options = {0};
The ``uv_process_t`` struct only acts as the handle, all options are set via
``uv_process_options_t``. To simply launch a process, you need to set only the
``file`` and ``args`` fields. ``file`` is the program to execute. Since
``uv_spawn`` uses :man:`execvp(3)` internally, there is no need to supply the full
path. Finally as per underlying conventions, **the arguments array has to be
one larger than the number of arguments, with the last element being NULL**.
After the call to ``uv_spawn``, ``uv_process_t.pid`` will contain the process
ID of the child process.
The exit callback will be invoked with the *exit status* and the type of *signal*
which caused the exit.
.. rubric:: spawn/main.c
.. literalinclude:: ../../code/spawn/main.c
:linenos:
:lines: 9-12
:emphasize-lines: 3
It is **required** to close the process watcher after the process exits.
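A hedged reconstruction of the spawn example, using ``mkdir test-dir`` as an
arbitrary child command (the callback name ``on_exit_cb`` is ours, chosen to
avoid clashing with the C library's ``on_exit``):

.. code-block:: c

    #include <inttypes.h>
    #include <stdio.h>
    #include <uv.h>

    uv_loop_t* loop;
    uv_process_t child_req;
    uv_process_options_t options;  /* global, therefore zero-initialized */

    void on_exit_cb(uv_process_t* req, int64_t exit_status, int term_signal) {
        fprintf(stderr, "Process exited with status %" PRId64 ", signal %d\n",
                exit_status, term_signal);
        uv_close((uv_handle_t*) req, NULL);  /* required once the child exits */
    }

    int main() {
        loop = uv_default_loop();

        char* args[3];
        args[0] = "mkdir";
        args[1] = "test-dir";
        args[2] = NULL;  /* the args array must be NULL-terminated */

        options.exit_cb = on_exit_cb;
        options.file = "mkdir";  /* resolved via execvp(); no full path needed */
        options.args = args;

        int r = uv_spawn(loop, &child_req, &options);
        if (r) {
            fprintf(stderr, "%s\n", uv_strerror(r));
            return 1;
        }
        fprintf(stderr, "Launched process with ID %d\n", child_req.pid);
        return uv_run(loop, UV_RUN_DEFAULT);
    }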
Changing process parameters
---------------------------
Before the child process is launched you can control the execution environment
using fields in ``uv_process_options_t``.
Change execution directory
++++++++++++++++++++++++++
Set ``uv_process_options_t.cwd`` to the corresponding directory.
Set environment variables
+++++++++++++++++++++++++
``uv_process_options_t.env`` is a null-terminated array of strings, each of the
form ``VAR=VALUE`` used to set up the environment variables for the process. Set
this to ``NULL`` to inherit the environment from the parent (this) process.
Option flags
++++++++++++
Setting ``uv_process_options_t.flags`` to a bitwise OR of the following flags,
modifies the child process behaviour:
* ``UV_PROCESS_SETUID`` - sets the child's execution user ID to ``uv_process_options_t.uid``.
* ``UV_PROCESS_SETGID`` - sets the child's execution group ID to ``uv_process_options_t.gid``.
Changing the UID/GID is only supported on Unix, ``uv_spawn`` will fail on
Windows with ``UV_ENOTSUP``.
* ``UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS`` - No quoting or escaping of
``uv_process_options_t.args`` is done on Windows. Ignored on Unix.
* ``UV_PROCESS_DETACHED`` - Starts the child process in a new session, which
will keep running after the parent process exits. See example below.
Detaching processes
-------------------
Passing the flag ``UV_PROCESS_DETACHED`` can be used to launch daemons, or
child processes which are independent of the parent, so that the parent exiting
does not affect them.
.. rubric:: detach/main.c
.. literalinclude:: ../../code/detach/main.c
:linenos:
:lines: 9-30
:emphasize-lines: 12,19
Just remember that the handle is still monitoring the child, so your program
won't exit. Use ``uv_unref()`` if you want to be more *fire-and-forget*.
Sending signals to processes
----------------------------
libuv wraps the standard ``kill(2)`` system call on Unix and implements one
with similar semantics on Windows, with *one caveat*: all of ``SIGTERM``,
``SIGINT`` and ``SIGKILL`` lead to termination of the process. The signature
of ``uv_kill`` is::
    int uv_kill(int pid, int signum);
For processes started using libuv, you may use ``uv_process_kill`` instead,
which accepts the ``uv_process_t`` watcher as the first argument, rather than
the pid. In this case, **remember to call** ``uv_close`` on the watcher.
Signals
-------
libuv provides wrappers around Unix signals with `some Windows support
<http://docs.libuv.org/en/v1.x/signal.html#signal>`_ as well.
Use ``uv_signal_init()`` to initialize
a handle and associate it with a loop. To listen for particular signals on
that handle, use ``uv_signal_start()`` with the handler function. Each handle
can only be associated with one signal number, with subsequent calls to
``uv_signal_start()`` overwriting earlier associations. Use ``uv_signal_stop()`` to
stop watching. Here is a small example demonstrating the various possibilities:
.. rubric:: signal/main.c
.. literalinclude:: ../../code/signal/main.c
:linenos:
:emphasize-lines: 17-18,27-28
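The full listing juggles two loops on two threads; a minimal, hedged sketch of
just the signal-handle pattern looks like this:

.. code-block:: c

    #include <signal.h>
    #include <stdio.h>
    #include <uv.h>

    void signal_handler(uv_signal_t* handle, int signum) {
        printf("Signal received: %d\n", signum);
        uv_signal_stop(handle);  /* loop exits once no active handles remain */
    }

    int main() {
        uv_signal_t sig;
        uv_signal_init(uv_default_loop(), &sig);
        uv_signal_start(&sig, signal_handler, SIGINT);  /* one signum per handle */
        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }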
.. NOTE::
``uv_run(loop, UV_RUN_NOWAIT)`` is similar to ``uv_run(loop, UV_RUN_ONCE)``
in that it will process only one event. UV_RUN_ONCE blocks if there are no
pending events, while UV_RUN_NOWAIT will return immediately. We use NOWAIT
so that one of the loops isn't starved because the other one has no pending
activity.
Send ``SIGUSR1`` to the process, and you'll find the handler being invoked
4 times, one for each ``uv_signal_t``. The handler just stops each handle,
so that the program exits. This sort of dispatch to all handlers is very
useful. A server using multiple event loops could ensure that all data was
safely saved before termination, simply by every loop adding a watcher for
``SIGINT``.
Child Process I/O
-----------------
A normal, newly spawned process has its own set of file descriptors, with 0,
1 and 2 being ``stdin``, ``stdout`` and ``stderr`` respectively. Sometimes you
may want to share file descriptors with the child. For example, perhaps your
application launches a sub-command and you want any errors to go in the log
file, but ignore ``stdout``. For this you'd like to have ``stderr`` of the
child be the same as the stderr of the parent. In this case, libuv supports
*inheriting* file descriptors. In this sample, we invoke the test program,
which is:
.. rubric:: proc-streams/test.c
.. literalinclude:: ../../code/proc-streams/test.c
The actual program ``proc-streams`` runs this while sharing only ``stderr``.
The file descriptors of the child process are set using the ``stdio`` field in
``uv_process_options_t``. First set the ``stdio_count`` field to the number of
file descriptors being set. ``uv_process_options_t.stdio`` is an array of
``uv_stdio_container_t``, which is:
.. code-block:: c
typedef struct uv_stdio_container_s {
uv_stdio_flags flags;
union {
uv_stream_t* stream;
int fd;
} data;
} uv_stdio_container_t;
where flags can have several values. Use ``UV_IGNORE`` if it isn't going to be
used. If the first three ``stdio`` fields are marked as ``UV_IGNORE`` they'll
redirect to ``/dev/null``.
Since we want to pass on an existing descriptor, we'll use ``UV_INHERIT_FD``.
Then we set the ``fd`` to ``stderr``.
.. rubric:: proc-streams/main.c
.. literalinclude:: ../../code/proc-streams/main.c
:linenos:
:lines: 15-17,27-
:emphasize-lines: 6,10,11,12
If you run ``proc-streams`` you'll see that only the line "This is stderr" will
be displayed. Try marking ``stdout`` as being inherited and see the output.
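A hedged sketch of the stdio setup just described, assuming the ``options``
struct from the spawn example:

.. code-block:: c

    uv_stdio_container_t child_stdio[3];
    child_stdio[0].flags = UV_IGNORE;      /* stdin  -> /dev/null */
    child_stdio[1].flags = UV_IGNORE;      /* stdout -> /dev/null */
    child_stdio[2].flags = UV_INHERIT_FD;  /* stderr shared with the parent */
    child_stdio[2].data.fd = 2;

    options.stdio_count = 3;
    options.stdio = child_stdio;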
It is dead simple to apply this redirection to streams. By setting ``flags``
to ``UV_INHERIT_STREAM`` and setting ``data.stream`` to the stream in the
parent process, the child process can treat that stream as standard I/O. This
can be used to implement something like CGI_.
.. _CGI: https://en.wikipedia.org/wiki/Common_Gateway_Interface
A sample CGI script/executable is:
.. rubric:: cgi/tick.c
.. literalinclude:: ../../code/cgi/tick.c
The CGI server combines the concepts from this chapter and :doc:`networking` so
that every client is sent ten ticks after which that connection is closed.
.. rubric:: cgi/main.c
.. literalinclude:: ../../code/cgi/main.c
:linenos:
:lines: 49-63
:emphasize-lines: 10
Here we simply accept the TCP connection and pass on the socket (*stream*) to
``invoke_cgi_script``.
.. rubric:: cgi/main.c
.. literalinclude:: ../../code/cgi/main.c
:linenos:
:lines: 16, 25-45
:emphasize-lines: 8-9,18,20
The ``stdout`` of the CGI script is set to the socket so that whatever our tick
script prints, gets sent to the client. By using processes, we can offload the
read/write buffering to the operating system, so in terms of convenience this
is great. Just be warned that creating processes is a costly task.
.. _pipes:
Parent-child IPC
----------------
A parent and child can have one or two way communication over a pipe created by
setting ``uv_stdio_container_t.flags`` to a bit-wise combination of
``UV_CREATE_PIPE`` and ``UV_READABLE_PIPE`` or ``UV_WRITABLE_PIPE``. The
read/write flag is from the perspective of the child process. In this case,
the ``uv_stream_t* stream`` field must be set to point to an initialized,
unopened ``uv_pipe_t`` instance.
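For example, a child-readable IPC pipe might be configured like this (a sketch,
again assuming the ``loop`` and ``options`` from the spawn example):

.. code-block:: c

    uv_pipe_t pipe;
    uv_pipe_init(loop, &pipe, 1 /* ipc */);

    uv_stdio_container_t child_stdio[1];
    child_stdio[0].flags = UV_CREATE_PIPE | UV_READABLE_PIPE;  /* readable by the child */
    child_stdio[0].data.stream = (uv_stream_t*) &pipe;

    options.stdio_count = 1;
    options.stdio = child_stdio;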
New stdio Pipes
+++++++++++++++
The ``uv_pipe_t`` structure represents more than just a `pipe(7)`_ (or ``|``);
it supports any streaming file-like object. On Windows, the only object of
that description is the `Named Pipe`_. On Unix, this could be any of `Unix
Domain Socket`_, or derived from `mkfifo(1)`_, or it could actually be a
`pipe(7)`_. When ``uv_spawn`` initializes a ``uv_pipe_t`` due to the
`UV_CREATE_PIPE` flag, it opts for creating a `socketpair(2)`_.
This is intended to allow multiple libuv processes to communicate over IPC,
as discussed below.
.. _pipe(7): https://man7.org/linux/man-pages/man7/pipe.7.html
.. _mkfifo(1): https://man7.org/linux/man-pages/man1/mkfifo.1.html
.. _socketpair(2): https://man7.org/linux/man-pages/man2/socketpair.2.html
.. _Unix Domain Socket: https://man7.org/linux/man-pages/man7/unix.7.html
.. _Named Pipe: https://docs.microsoft.com/en-us/windows/win32/ipc/named-pipes
Arbitrary process IPC
+++++++++++++++++++++
Since domain sockets [#]_ can have a well known name and a location in the
file-system they can be used for IPC between unrelated processes. The D-BUS_
system used by open source desktop environments uses domain sockets for event
notification. Various applications can then react when a contact comes online
or new hardware is detected. The MySQL server also runs a domain socket on
which clients can interact with it.
.. _D-BUS: https://www.freedesktop.org/wiki/Software/dbus
When using domain sockets, a client-server pattern is usually followed with the
creator/owner of the socket acting as the server. After the initial setup,
messaging is no different from TCP, so we'll re-use the echo server example.
.. rubric:: pipe-echo-server/main.c
.. literalinclude:: ../../code/pipe-echo-server/main.c
:linenos:
:lines: 70-
:emphasize-lines: 5,10,14
We name the socket ``echo.sock`` which means it will be created in the local
directory. This socket now behaves no differently from TCP sockets as far as
the stream API is concerned. You can test this server using `socat`_::
$ socat - /path/to/socket
A client which wants to connect to a domain socket will use::
void uv_pipe_connect(uv_connect_t *req, uv_pipe_t *handle, const char *name, uv_connect_cb cb);
where ``name`` will be ``echo.sock`` or similar. On Unix systems, ``name`` must
point to a valid file (e.g. ``/tmp/echo.sock``). On Windows, ``name`` follows a
``\\?\pipe\echo.sock`` format.
.. _socat: http://www.dest-unreach.org/socat/
Sending file descriptors over pipes
+++++++++++++++++++++++++++++++++++
The cool thing about domain sockets is that file descriptors can be exchanged
between processes by sending them over a domain socket. This allows processes
to hand off their I/O to other processes. Applications include load-balancing
servers, worker processes and other ways to make optimum use of CPU. libuv only
supports sending **TCP sockets or other pipes** over pipes for now.
To demonstrate, we will look at an echo server implementation that hands off
clients to worker processes in a round-robin fashion. This program is a bit
involved, and while only snippets are included in the book, it is recommended
to read the full code to really understand it.
The worker process is quite simple, since the file-descriptor is handed over to
it by the master.
.. rubric:: multi-echo-server/worker.c
.. literalinclude:: ../../code/multi-echo-server/worker.c
:linenos:
:lines: 7-9,81-
:emphasize-lines: 6-8
``queue`` is the pipe connected to the master process on the other end, along
which new file descriptors get sent. It is important to set the ``ipc``
argument of ``uv_pipe_init`` to 1 to indicate this pipe will be used for
inter-process communication! Since the master will write the file handle to the
standard input of the worker, we connect the pipe to ``stdin`` using
``uv_pipe_open``.
.. rubric:: multi-echo-server/worker.c
.. literalinclude:: ../../code/multi-echo-server/worker.c
:linenos:
:lines: 51-79
:emphasize-lines: 10,15,20
First we call ``uv_pipe_pending_count()`` to ensure that a handle is available
to read out. If your program could deal with different types of handles,
``uv_pipe_pending_type()`` can be used to determine the type.
Although ``accept`` seems odd in this code, it actually makes sense. What
``accept`` traditionally does is get a file descriptor (the client) from
another file descriptor (the listening socket), which is exactly what we do
here: fetch the file descriptor (``client``) from ``queue``. From this point
the worker does standard echo server stuff.
Turning now to the master, let's take a look at how the workers are launched to
allow load balancing.
.. rubric:: multi-echo-server/main.c
.. literalinclude:: ../../code/multi-echo-server/main.c
:linenos:
:lines: 9-13
The ``child_worker`` structure wraps the process, and the pipe between the
master and the individual process.
.. rubric:: multi-echo-server/main.c
.. literalinclude:: ../../code/multi-echo-server/main.c
:linenos:
:lines: 51,61-95
:emphasize-lines: 17,20-21
In setting up the workers, we use the nifty libuv function ``uv_cpu_info`` to
get the number of CPUs so we can launch an equal number of workers. Again it is
important to initialize the pipe acting as the IPC channel with the third
argument as 1. We then indicate that the child process' ``stdin`` is to be
a readable pipe (from the point of view of the child). Everything is
straightforward till here. The workers are launched and waiting for file
descriptors to be written to their standard input.
It is in ``on_new_connection`` (the TCP infrastructure is initialized in
``main()``), that we accept the client socket and pass it along to the next
worker in the round-robin.
.. rubric:: multi-echo-server/main.c
.. literalinclude:: ../../code/multi-echo-server/main.c
:linenos:
:lines: 31-49
:emphasize-lines: 9,12-13
The ``uv_write2`` call handles all the abstraction and it is simply a matter of
passing in the handle (``client``) as the right argument. With this our
multi-process echo server is operational.
Thanks to Kyle for `pointing out`_ that ``uv_write2()`` requires a non-empty
buffer even when sending handles.
.. _pointing out: https://github.com/nikhilm/uvbook/issues/56
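A hedged sketch of that hand-off, assuming the ``child_worker`` struct and the
accepted ``client`` handle from the listing:

.. code-block:: c

    uv_buf_t dummy_buf = uv_buf_init(".", 1);  /* uv_write2() needs a non-empty buffer */
    uv_write_t* write_req = (uv_write_t*) malloc(sizeof(uv_write_t));
    uv_write2(write_req, (uv_stream_t*) &worker->pipe, &dummy_buf, 1,
              (uv_stream_t*) client,  /* the handle actually being sent */
              NULL /* a real program would free write_req in a write callback */);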
----
.. [#] In this section domain sockets stands in for named pipes on Windows as
well.

deps/libuv/docs/src/guide/threads.rst vendored Normal file

@@ -0,0 +1,385 @@
Threads
=======
Wait a minute! Why are we on threads? Aren't event loops supposed to be **the
way** to do *web-scale programming*? Well... no. Threads are still the medium in
which processors do their jobs. Threads are therefore mighty useful sometimes, even
though you might have to wade through various synchronization primitives.
Threads are used internally to fake the asynchronous nature of all of the system
calls. libuv also uses threads to allow you, the application, to perform a task
asynchronously that is actually blocking, by spawning a thread and collecting
the result when it is done.
Today there are two predominant thread libraries: the Windows threads
implementation and POSIX's :man:`pthreads(7)`. libuv's thread API is analogous to
the pthreads API and often has similar semantics.
A notable aspect of libuv's thread facilities is that it is a self-contained
section within libuv. Whereas other features intimately depend on the event
loop and callback principles, threads are completely agnostic; they block as
required, signal errors directly via return values, and, as shown in the
:ref:`first example <thread-create-example>`, don't even require a running
event loop.
libuv's thread API is also very limited since the semantics and syntax of
threads are different on all platforms, with different levels of completeness.
This chapter makes the following assumption: **There is only one event loop,
running in one thread (the main thread)**. No other thread interacts
with the event loop (except using ``uv_async_send``).
Core thread operations
----------------------
There isn't much here, you just start a thread using ``uv_thread_create()`` and
wait for it to close using ``uv_thread_join()``.
.. _thread-create-example:
.. rubric:: thread-create/main.c
.. literalinclude:: ../../code/thread-create/main.c
:linenos:
:lines: 26-36
:emphasize-lines: 3-7
.. tip::
``uv_thread_t`` is just an alias for ``pthread_t`` on Unix, but this is an
implementation detail, avoid depending on it to always be true.
The second parameter is the function which will serve as the entry point for
the thread, the last parameter is a ``void *`` argument which can be used to pass
custom parameters to the thread. The function ``hare`` will now run in a separate
thread, scheduled pre-emptively by the operating system:
.. rubric:: thread-create/main.c
.. literalinclude:: ../../code/thread-create/main.c
:linenos:
:lines: 6-14
:emphasize-lines: 2
Unlike ``pthread_join()`` which allows the target thread to pass back a value to
the calling thread using a second parameter, ``uv_thread_join()`` does not. To
send values use :ref:`inter-thread-communication`.
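If you don't have the listing at hand, the same pattern in a self-contained
sketch (``uv_sleep()`` assumes libuv 1.34.0 or newer; older code would use a
platform sleep call):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    void hare(void *arg) {
        int tracklen = *((int *) arg);
        while (tracklen--) {
            uv_sleep(1000); /* requires libuv >= 1.34.0 */
            printf("hare is still running\n");
        }
    }

    int main() {
        int tracklen = 3;
        uv_thread_t hare_id;
        uv_thread_create(&hare_id, hare, &tracklen); /* entry point and void* argument */
        uv_thread_join(&hare_id); /* blocks until hare() returns; no return value */
        return 0;
    }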
Synchronization Primitives
--------------------------
This section is purposely spartan. This book is not about threads, so I only
catalogue any surprises in the libuv APIs here. For the rest you can look at
the :man:`pthreads(7)` man pages.
Mutexes
~~~~~~~
The mutex functions are a **direct** map to the pthread equivalents.
.. rubric:: libuv mutex functions
.. code-block:: c

    int uv_mutex_init(uv_mutex_t* handle);
    int uv_mutex_init_recursive(uv_mutex_t* handle);
    void uv_mutex_destroy(uv_mutex_t* handle);
    void uv_mutex_lock(uv_mutex_t* handle);
    int uv_mutex_trylock(uv_mutex_t* handle);
    void uv_mutex_unlock(uv_mutex_t* handle);
The ``uv_mutex_init()``, ``uv_mutex_init_recursive()`` and ``uv_mutex_trylock()``
functions will return 0 on success, and an error code otherwise.
If `libuv` has been compiled with debugging enabled, ``uv_mutex_destroy()``,
``uv_mutex_lock()`` and ``uv_mutex_unlock()`` will ``abort()`` on error.
Similarly ``uv_mutex_trylock()`` will abort if the error is anything *other
than* ``EAGAIN`` or ``EBUSY``.
Recursive mutexes are supported, but you should not rely on them. Also, they
should not be used with ``uv_cond_t`` variables.
The default BSD mutex implementation will raise an error if a thread which has
locked a mutex attempts to lock it again. For example, a construct like::

    uv_mutex_init(a_mutex);
    uv_mutex_lock(a_mutex);
    uv_thread_create(thread_id, entry, (void *)a_mutex);
    uv_mutex_lock(a_mutex);

    // more things here
can be used to wait until another thread initializes some stuff and then
unlocks ``a_mutex``, but will lead to your program crashing in debug mode, or
to an error being returned by the second call to ``uv_mutex_lock()``.
.. note::
Mutexes on Windows are always recursive.
Locks
~~~~~
Read-write locks are a more granular access mechanism. Any number of readers
can access shared memory at the same time. A writer may not acquire the lock
when it is held by a reader. A reader or writer may not acquire a lock when
a writer is holding it. Read-write locks are frequently used in databases. Here
is a toy example.
.. rubric:: locks/main.c - simple rwlocks
.. literalinclude:: ../../code/locks/main.c
:linenos:
:emphasize-lines: 13,16,27,31,42,55
Run this and observe how the readers will sometimes overlap. In case of
multiple writers, schedulers will usually give them higher priority, so if you
add two writers, you'll see that both writers tend to finish first before the
readers get a chance again.
We also use barriers in the above example so that the main thread can wait for
all readers and writers to indicate they have ended.
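The shape of the rwlock calls, as a hedged sketch (``shared_num`` is a
hypothetical piece of shared state; initialize the lock with
``uv_rwlock_init`` before starting any threads):

.. code-block:: c

    uv_rwlock_t numlock; /* uv_rwlock_init(&numlock) before use */
    int shared_num;

    /* any number of readers may hold the lock at once */
    void reader(void *arg) {
        uv_rwlock_rdlock(&numlock);
        fprintf(stderr, "Read %d\n", shared_num);
        uv_rwlock_rdunlock(&numlock);
    }

    /* a writer gets exclusive access */
    void writer(void *arg) {
        uv_rwlock_wrlock(&numlock);
        shared_num++;
        uv_rwlock_wrunlock(&numlock);
    }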
Others
~~~~~~
libuv also supports semaphores_, `condition variables`_ and barriers_ with APIs
very similar to their pthread counterparts.
.. _semaphores: https://en.wikipedia.org/wiki/Semaphore_(programming)
.. _condition variables: https://en.wikipedia.org/wiki/Monitor_(synchronization)#Condition_variables_2
.. _barriers: https://en.wikipedia.org/wiki/Barrier_(computer_science)
In addition, libuv provides a convenience function ``uv_once()``. Multiple
threads can attempt to call ``uv_once()`` with a given guard and a function
pointer, but **only the first one will win: the function will be called once
and only once**::

    /* Initialize guard */
    static uv_once_t once_only = UV_ONCE_INIT;

    int i = 0;

    void increment() {
        i++;
    }

    void thread1() {
        /* ... work */
        uv_once(&once_only, increment);
    }

    void thread2() {
        /* ... work */
        uv_once(&once_only, increment);
    }

    int main() {
        /* ... spawn threads */
    }
After all threads are done, ``i == 1``.
libuv v0.11.11 onwards also added a ``uv_key_t`` struct and api_ for
thread-local storage.

.. _api: http://docs.libuv.org/en/v1.x/threading.html#thread-local-storage

.. _libuv-work-queue:
libuv work queue
----------------
``uv_queue_work()`` is a convenience function that allows an application to run
a task in a separate thread, and have a callback that is triggered when the
task is done. A seemingly simple function, what makes ``uv_queue_work()``
tempting is that it allows potentially any third-party libraries to be used
with the event-loop paradigm. When you use event loops, it is *imperative to
make sure that no function which runs periodically in the loop thread blocks
when performing I/O or is a serious CPU hog*, because this means that the loop
slows down and events are not being handled at full capacity.
However, a lot of existing code features blocking functions (for example,
a routine which performs I/O under the hood) that are meant to be used with
threads if you want responsiveness (the classic 'one thread per client' server
model). Getting them to play well with an event loop library generally involves
rolling your own system for running the task in a separate thread. libuv just
provides a convenient abstraction for this.
Here is a simple example inspired by `node.js is cancer`_. We are going to
calculate fibonacci numbers, sleeping a bit along the way, but run it in
a separate thread so that the blocking and CPU bound task does not prevent the
event loop from performing other activities.
.. rubric:: queue-work/main.c - lazy fibonacci
.. literalinclude:: ../../code/queue-work/main.c
:linenos:
:lines: 17-29
The actual task function is simple, nothing to show that it is going to be
run in a separate thread. The ``uv_work_t`` structure is the clue. You can pass
arbitrary data through it using the ``void* data`` field and use it to
communicate to and from the thread. But be sure you are using proper locks if
you are changing things while both threads may be running.
The trigger is ``uv_queue_work``:
.. rubric:: queue-work/main.c
.. literalinclude:: ../../code/queue-work/main.c
:linenos:
:lines: 31-44
:emphasize-lines: 10
The thread function will be launched in a separate thread, passed the
``uv_work_t`` structure, and once the function returns, the *after* function
will be called on the thread the event loop is running in. It will be passed
the same structure.
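A hedged sketch of the whole round trip, following the shape of the fibonacci
example (names and output are illustrative):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    void fib(uv_work_t *req) {
        /* runs on a thread-pool thread; blocking here is fine */
        int n = *((int *) req->data);
        fprintf(stderr, "crunching fib(%d)...\n", n);
    }

    void after_fib(uv_work_t *req, int status) {
        /* runs back on the loop thread once fib() has returned */
        fprintf(stderr, "done with fib(%d)\n", *((int *) req->data));
    }

    int main() {
        uv_loop_t *loop = uv_default_loop();
        int n = 10;
        uv_work_t req;
        req.data = (void *) &n;
        uv_queue_work(loop, &req, fib, after_fib);
        return uv_run(loop, UV_RUN_DEFAULT);
    }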
For writing wrappers to blocking libraries, a common :ref:`pattern <baton>`
is to use a baton to exchange data.
Since libuv version `0.9.4` an additional function, ``uv_cancel()``, is
available. This allows you to cancel tasks on the libuv work queue. Only tasks
that *are yet to be started* can be cancelled. If a task has *already started
executing, or it has finished executing*, ``uv_cancel()`` **will fail**.
``uv_cancel()`` is useful to cleanup pending tasks if the user requests
termination. For example, a music player may queue up multiple directories to
be scanned for audio files. If the user terminates the program, it should quit
quickly and not wait until all pending requests are run.
Let's modify the fibonacci example to demonstrate ``uv_cancel()``. We first set
up a signal handler for termination.
.. rubric:: queue-cancel/main.c
.. literalinclude:: ../../code/queue-cancel/main.c
:linenos:
:lines: 43-
When the user triggers the signal by pressing ``Ctrl+C``, we call
``uv_cancel()`` on all the workers. ``uv_cancel()`` will fail, returning
``UV_EBUSY``, for those that are already executing or finished.
.. rubric:: queue-cancel/main.c
.. literalinclude:: ../../code/queue-cancel/main.c
:linenos:
:lines: 33-41
:emphasize-lines: 6
For tasks that do get cancelled successfully, the *after* function is called
with ``status`` set to ``UV_ECANCELED``.
.. rubric:: queue-cancel/main.c
.. literalinclude:: ../../code/queue-cancel/main.c
:linenos:
:lines: 28-31
:emphasize-lines: 2
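Those included lines presumably amount to little more than a status check
(a sketch):

.. code-block:: c

    void after_fib(uv_work_t *req, int status) {
        if (status == UV_ECANCELED)
            fprintf(stderr, "Calculation of %d cancelled.\n", *(int *) req->data);
    }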
``uv_cancel()`` can also be used with ``uv_fs_t`` and ``uv_getaddrinfo_t``
requests. For the filesystem family of functions, ``uv_fs_t.result`` will be
set to ``UV_ECANCELED``.
.. TIP::
    A well designed program would have a way to terminate long running workers
    that have already started executing. Such a worker could periodically check
    for a variable that only the main thread sets to signal termination.
.. _inter-thread-communication:
Inter-thread communication
--------------------------
Sometimes you want various threads to actually send each other messages *while*
they are running. For example you might be running some long duration task in
a separate thread (perhaps using ``uv_queue_work``) but want to notify progress
to the main thread. This is a simple example of having a download manager
informing the user of the status of running downloads.
.. rubric:: progress/main.c
.. literalinclude:: ../../code/progress/main.c
:linenos:
:lines: 7-8,35-
:emphasize-lines: 2,11
The async thread communication works *on loops* so although any thread can be
the message sender, only threads with libuv loops can be receivers (or rather
the loop is the receiver). libuv will invoke the callback (``print_progress``)
with the async watcher whenever it receives a message.
.. warning::
It is important to realize that since the message send is *async*, the callback
may be invoked immediately after ``uv_async_send`` is called in another
thread, or it may be invoked after some time. libuv may also combine
multiple calls to ``uv_async_send`` and invoke your callback only once. The
only guarantee that libuv makes is -- The callback function is called *at
least once* after the call to ``uv_async_send``. If you have no pending
calls to ``uv_async_send``, the callback won't be called. If you make two
or more calls, and libuv hasn't had a chance to run the callback yet, it
*may* invoke your callback *only once* for the multiple invocations of
``uv_async_send``. Your callback will never be called twice for just one
event.
.. rubric:: progress/main.c
.. literalinclude:: ../../code/progress/main.c
:linenos:
:lines: 10-24
:emphasize-lines: 7-8
In the download function, we modify the progress indicator and queue the message
for delivery with ``uv_async_send``. Remember: ``uv_async_send`` is also
non-blocking and will return immediately.
.. rubric:: progress/main.c
.. literalinclude:: ../../code/progress/main.c
:linenos:
:lines: 31-34
The callback is a standard libuv pattern, extracting the data from the watcher.
Finally it is important to remember to clean up the watcher.
.. rubric:: progress/main.c
.. literalinclude:: ../../code/progress/main.c
:linenos:
:lines: 26-29
:emphasize-lines: 3
After this example, which showed the abuse of the ``data`` field, bnoordhuis_
pointed out that using the ``data`` field is not thread safe, and
``uv_async_send()`` is actually only meant to wake up the event loop. Use
a mutex or rwlock to ensure accesses are performed in the right order.
.. note::
mutexes and rwlocks **DO NOT** work inside a signal handler, whereas
``uv_async_send`` does.
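A hedged sketch of the recommended arrangement, with ``uv_async_send()`` used
purely as a wakeup and the payload guarded by a mutex (all names here are
illustrative):

.. code-block:: c

    typedef struct {
        uv_async_t async;
        uv_mutex_t mutex;  /* guards percentage */
        double percentage;
    } progress_t;

    /* loop thread; assumes async.data was pointed at the struct at init time */
    void on_progress(uv_async_t *handle) {
        progress_t *p = (progress_t *) handle->data;
        uv_mutex_lock(&p->mutex);
        double pct = p->percentage;
        uv_mutex_unlock(&p->mutex);
        fprintf(stderr, "Downloaded %.2f%%\n", pct);
    }

    /* called from the worker thread */
    void report(progress_t *p, double pct) {
        uv_mutex_lock(&p->mutex);
        p->percentage = pct;
        uv_mutex_unlock(&p->mutex);
        uv_async_send(&p->async); /* only a wakeup; data travels under the mutex */
    }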
One use case where ``uv_async_send`` is required is when interoperating with
libraries that require thread affinity for their functionality. For example, in
node.js, a v8 engine instance, its contexts, and its objects are bound to the
thread that the v8 instance was started in. Interacting with v8 data structures
from another thread can lead to undefined results. Now consider some node.js
module which binds a third party library. It may go something like this:
1. In node, the third party library is set up with a JavaScript callback to be
   invoked for more information::

       var lib = require('lib');
       lib.on_progress(function() {
           console.log("Progress");
       });

       lib.do();

       // do other stuff
2. ``lib.do`` is supposed to be non-blocking but the third party lib is
blocking, so the binding uses ``uv_queue_work``.
3. The actual work being done in a separate thread wants to invoke the progress
callback, but cannot directly call into v8 to interact with JavaScript. So
it uses ``uv_async_send``.
4. The async callback, invoked in the main loop thread, which is the v8 thread,
then interacts with v8 to invoke the JavaScript callback.
----
.. _node.js is cancer: http://widgetsandshit.com/teddziuba/2011/10/node-js-is-cancer.html
.. _bnoordhuis: https://github.com/bnoordhuis
437
deps/libuv/docs/src/guide/utilities.rst vendored Normal file
@ -0,0 +1,437 @@
Utilities
=========
This chapter catalogues tools and techniques which are useful for common tasks.
The `libev man page`_ already covers some patterns which can be adapted to
libuv through simple API changes. This chapter also covers parts of the libuv
API that don't require entire chapters dedicated to them.
Timers
------
Timers invoke the callback after a certain time has elapsed since the timer was
started. libuv timers can also be set to invoke at regular intervals instead of
just once.
The simplest use is to init a timer and start it with a ``timeout`` and an
optional ``repeat``. Timers can be stopped at any time.

.. code-block:: c

    uv_timer_t timer_req;

    uv_timer_init(loop, &timer_req);
    uv_timer_start(&timer_req, callback, 5000, 2000);

will start a repeating timer, which first fires 5 seconds (the ``timeout``)
after the execution of ``uv_timer_start``, then repeats every 2 seconds (the
``repeat``). Use:

.. code-block:: c

    uv_timer_stop(&timer_req);

to stop the timer. This can be used safely from within the callback as well.
The repeat interval can be modified at any time with::

    void uv_timer_set_repeat(uv_timer_t *timer, uint64_t repeat);
which will take effect **when possible**. If this function is called from
a timer callback, it means:
* If the timer was non-repeating, the timer has already been stopped. Use
``uv_timer_start`` again.
* If the timer is repeating, the next timeout has already been scheduled, so
the old repeat interval will be used once more before the timer switches to
the new interval.
The utility function::

    int uv_timer_again(uv_timer_t *);

applies **only to repeating timers** and is equivalent to stopping the timer
and then starting it with both initial ``timeout`` and ``repeat`` set to the
old ``repeat`` value. If the timer hasn't been started it fails, returning the
error code ``UV_EINVAL``.
An actual timer example is in the :ref:`reference count section
<reference-count>`.
.. _reference-count:
Event loop reference count
--------------------------
The event loop only runs as long as there are active handles. This system
works by having every handle increase the reference count of the event loop
when it is started and decrease the reference count when stopped. It is also
possible to manually change the reference count of handles using::

    void uv_ref(uv_handle_t*);
    void uv_unref(uv_handle_t*);
These functions can be used to allow a loop to exit even when a watcher is
active or to use custom objects to keep the loop alive.
The latter can be used with interval timers. You might have a garbage collector
which runs every X seconds, or your network service might send a heartbeat to
others periodically, but you don't want to have to stop them along all clean
exit paths or error scenarios. Or you may want the program to exit when all
your other watchers are done. In that case, just unref the timer immediately
after creation, so that even if it is the only watcher running, ``uv_run`` will
still exit.
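For instance (a sketch, with ``gc`` standing in for the timer callback):

.. code-block:: c

    uv_timer_t gc_req;

    uv_timer_init(loop, &gc_req);
    /* the loop may now exit even while this timer is active */
    uv_unref((uv_handle_t*) &gc_req);
    uv_timer_start(&gc_req, gc, 0, 2000);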
This is also used in node.js where some libuv methods are being bubbled up to
the JS API. A ``uv_handle_t`` (the superclass of all watchers) is created per
JS object and can be ref/unrefed.
.. rubric:: ref-timer/main.c
.. literalinclude:: ../../code/ref-timer/main.c
:linenos:
:lines: 5-8, 17-
:emphasize-lines: 9
We initialize the garbage collector timer, then immediately ``unref`` it.
Observe how after 9 seconds, when the fake job is done, the program
automatically exits, even though the garbage collector is still running.
Idler pattern
-------------
The callbacks of idle handles are invoked once per event loop iteration. The
idle callback can be used to perform some very low priority activity. For example,
you could dispatch a summary of the daily application performance to the
developers for analysis during periods of idleness, or use the application's
CPU time to perform SETI calculations :) An idle watcher is also useful in
a GUI application. Say you are using an event loop for a file download. If the
TCP socket is still being established and no other events are present, your
event loop will pause (**block**), which means your progress bar will freeze
and the user will face an unresponsive application. In such a case, queue up an
idle watcher to keep the UI operational.
.. rubric:: idle-compute/main.c
.. literalinclude:: ../../code/idle-compute/main.c
:linenos:
:lines: 5-9, 34-
:emphasize-lines: 13
Here we initialize the idle watcher and queue it up along with the actual
events we are interested in. ``crunch_away`` will now be called repeatedly
until the user types something and presses Return. Then it will be interrupted
for a brief period as the loop deals with the input data, after which it will
keep calling the idle callback again.
.. rubric:: idle-compute/main.c
.. literalinclude:: ../../code/idle-compute/main.c
:linenos:
:lines: 10-19
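In outline, the idle portion of that listing presumably reduces to:

.. code-block:: c

    uv_idle_t idler;

    uv_idle_init(loop, &idler);
    /* crunch_away will be invoked once per loop iteration;
       call uv_idle_stop(&idler) when the low priority work is done */
    uv_idle_start(&idler, crunch_away);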
.. _baton:
Passing data to worker thread
-----------------------------
When using ``uv_queue_work`` you'll usually need to pass complex data through
to the worker thread. The solution is to use a ``struct`` and set
``uv_work_t.data`` to point to it. A slight variation is to have the
``uv_work_t`` itself as the first member of this struct (called a baton [#]_).
This allows cleaning up the work request and all the data in one free call.
.. code-block:: c
    :linenos:
    :emphasize-lines: 2

    typedef struct ftp_baton {
        uv_work_t req;
        char *host;
        int port;
        char *username;
        char *password;
    } ftp_baton;
.. code-block:: c
    :linenos:
    :emphasize-lines: 2

    ftp_baton *baton = (ftp_baton*) malloc(sizeof(ftp_baton));
    baton->req.data = (void*) baton;
    baton->host = strdup("my.webhost.com");
    baton->port = 21;
    // ...

    uv_queue_work(loop, &baton->req, ftp_session, ftp_cleanup);
Here we create the baton and queue the task.
Now the task function can extract the data it needs:
.. code-block:: c
    :linenos:
    :emphasize-lines: 2, 12

    void ftp_session(uv_work_t *req) {
        ftp_baton *baton = (ftp_baton*) req->data;

        fprintf(stderr, "Connecting to %s\n", baton->host);
    }

    void ftp_cleanup(uv_work_t *req) {
        ftp_baton *baton = (ftp_baton*) req->data;

        free(baton->host);
        // ...
        free(baton);
    }
We then free the baton, which also frees the work request.
External I/O with polling
-------------------------
Usually third-party libraries will handle their own I/O, and keep track of
their sockets and other files internally. In this case it isn't possible to use
the standard stream I/O operations, but the library can still be integrated
into the libuv event loop. All that is required is that the library allow you
to access the underlying file descriptors and provide functions that process
tasks in small increments as decided by your application. Some libraries,
though, will not allow such access, providing only a standard blocking function
which will perform the entire I/O transaction and only then return. It is
unwise to use these in the event loop thread; use the :ref:`threadpool`
instead. Of course, this will also mean losing granular control over the
library.
The ``uv_poll`` section of libuv simply watches file descriptors using the
operating system notification mechanism. In some sense, all the I/O operations
that libuv implements itself are also backed by ``uv_poll`` like code. Whenever
the OS notices a change of state in file descriptors being polled, libuv will
invoke the associated callback.
Here we will walk through a simple download manager that will use libcurl_ to
download files. Rather than give all control to libcurl, we'll instead be
using the libuv event loop, and use the non-blocking, async multi_ interface to
progress with the download whenever libuv notifies of I/O readiness.
.. _libcurl: https://curl.haxx.se/libcurl/
.. _multi: https://curl.haxx.se/libcurl/c/libcurl-multi.html
.. rubric:: uvwget/main.c - The setup
.. literalinclude:: ../../code/uvwget/main.c
:linenos:
:lines: 1-9,140-
:emphasize-lines: 7,21,24-25
The way each library is integrated with libuv will vary. In the case of
libcurl, we can register two callbacks. The socket callback ``handle_socket``
is invoked whenever the state of a socket changes and we have to start polling
it. ``start_timeout`` is called by libcurl to notify us of the next timeout
interval, after which we should drive libcurl forward regardless of I/O status.
This is so that libcurl can handle errors or do whatever else is required to
get the download moving.
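The registration itself presumably boils down to the standard libcurl
multi-interface calls (``curl_handle`` being the global ``CURLM*``):

.. code-block:: c

    curl_handle = curl_multi_init();
    curl_multi_setopt(curl_handle, CURLMOPT_SOCKETFUNCTION, handle_socket);
    curl_multi_setopt(curl_handle, CURLMOPT_TIMERFUNCTION, start_timeout);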
Our downloader is to be invoked as::

    $ ./uvwget [url1] [url2] ...

So we add each argument as a URL.
.. rubric:: uvwget/main.c - Adding urls
.. literalinclude:: ../../code/uvwget/main.c
:linenos:
:lines: 39-56
:emphasize-lines: 13-14
We let libcurl directly write the data to a file, but much more is possible if
you so desire.
``start_timeout`` will be called immediately the first time by libcurl, so
things are set in motion. This simply starts a libuv `timer <#timers>`_ which
drives ``curl_multi_socket_action`` with ``CURL_SOCKET_TIMEOUT`` whenever it
times out. ``curl_multi_socket_action`` is what drives libcurl, and what we
call whenever sockets change state. But before we go into that, we need to poll
on sockets whenever ``handle_socket`` is called.
.. rubric:: uvwget/main.c - Setting up polling
.. literalinclude:: ../../code/uvwget/main.c
:linenos:
:lines: 102-140
:emphasize-lines: 9,11,15,21,24
We are interested in the socket fd ``s``, and the ``action``. For every socket
we create a ``uv_poll_t`` handle if it doesn't exist, and associate it with the
socket using ``curl_multi_assign``. This way ``socketp`` points to it whenever
the callback is invoked.
In the case that the download is done or fails, libcurl requests removal of the
poll. So we stop and free the poll handle.
Depending on what events libcurl wishes to watch for, we start polling with
``UV_READABLE`` or ``UV_WRITABLE``. Now libuv will invoke the poll callback
whenever the socket is ready for reading or writing. Calling ``uv_poll_start``
multiple times on the same handle is acceptable; it will just update the events
mask with the new value. ``curl_perform`` is the crux of this program.
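The polling setup just described, as a hedged sketch (``context`` is an
illustrative per-socket struct holding the ``uv_poll_t``):

.. code-block:: c

    /* inside handle_socket, for the CURL_POLL_IN/OUT/INOUT cases */
    int events = 0;
    if (action != CURL_POLL_IN)  events |= UV_WRITABLE;
    if (action != CURL_POLL_OUT) events |= UV_READABLE;

    curl_multi_assign(curl_handle, s, (void *) context);
    uv_poll_start(&context->poll_handle, events, curl_perform);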
.. rubric:: uvwget/main.c - Driving libcurl.
.. literalinclude:: ../../code/uvwget/main.c
:linenos:
:lines: 81-95
:emphasize-lines: 2,6-7,12
The first thing we do is to stop the timer, since there has been some progress
in the interval. Then depending on what event triggered the callback, we set
the correct flags. Then we call ``curl_multi_socket_action`` with the socket
that progressed and the flags informing about what events happened. At this
point libcurl does all of its internal tasks in small increments, and will
attempt to return as fast as possible, which is exactly what an evented program
wants in its main thread. libcurl keeps queueing messages into its own queue
about transfer progress. In our case we are only interested in transfers that
are completed. So we extract these messages, and clean up handles whose
transfers are done.
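A hedged sketch of that flow (``timeout`` is the presumed global libuv timer,
``context`` the per-socket struct from before):

.. code-block:: c

    void curl_perform(uv_poll_t *req, int status, int events) {
        /* progress was made, so the pending timeout no longer applies */
        uv_timer_stop(&timeout);

        int flags = 0;
        if (events & UV_READABLE) flags |= CURL_CSELECT_IN;
        if (events & UV_WRITABLE) flags |= CURL_CSELECT_OUT;

        curl_context_t *context = (curl_context_t *) req->data;
        int running_handles;
        curl_multi_socket_action(curl_handle, context->sockfd, flags,
                                 &running_handles);
        /* then drain curl_multi_info_read() for CURLMSG_DONE messages */
    }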
.. rubric:: uvwget/main.c - Reading transfer status.
.. literalinclude:: ../../code/uvwget/main.c
:linenos:
:lines: 58-79
:emphasize-lines: 6,9-10,13-14
Check & Prepare watchers
------------------------
TODO
Loading libraries
-----------------
libuv provides a cross platform API to dynamically load `shared libraries`_.
This can be used to implement your own plugin/extension/module system and is
used by node.js to implement ``require()`` support for bindings. The usage is
quite simple as long as your library exports the right symbols. Be careful with
sanity and security checks when loading third party code; otherwise your
program may behave unpredictably. This example implements a very simple
plugin system which does nothing except print the name of the plugin.
Let us first look at the interface provided to plugin authors.
.. rubric:: plugin/plugin.h
.. literalinclude:: ../../code/plugin/plugin.h
:linenos:
You can similarly add more functions that plugin authors can use to do useful
things in your application [#]_. A sample plugin using this API is:
.. rubric:: plugin/hello.c
.. literalinclude:: ../../code/plugin/hello.c
:linenos:
Our interface defines that all plugins should have an ``initialize`` function
which will be called by the application. This plugin is compiled as a shared
library and can be loaded by running our application::

    $ ./plugin libhello.dylib
    Loading libhello.dylib
    Registered plugin "Hello World!"
.. NOTE::
The shared library filename will be different depending on platforms. On
Linux it is ``libhello.so``.
This is done by using ``uv_dlopen`` to first load the shared library
``libhello.dylib``. Then we get access to the ``initialize`` function using
``uv_dlsym`` and invoke it.
.. rubric:: plugin/main.c
.. literalinclude:: ../../code/plugin/main.c
:linenos:
:lines: 7-
:emphasize-lines: 15, 18, 24
``uv_dlopen`` expects a path to the shared library and sets the opaque
``uv_lib_t`` pointer. It returns 0 on success, -1 on error. Use ``uv_dlerror``
to get the error message.
``uv_dlsym`` stores a pointer to the symbol in the second argument in the third
argument. ``init_plugin_function`` is a function pointer to the sort of
function we are looking for in the application's plugins.
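In outline, the loading sequence is (a sketch; error paths return from
``main``):

.. code-block:: c

    uv_lib_t *lib = (uv_lib_t *) malloc(sizeof(uv_lib_t));
    if (uv_dlopen(path, lib)) {
        fprintf(stderr, "Error: %s\n", uv_dlerror(lib));
        return 1;
    }

    init_plugin_function init_plugin;
    if (uv_dlsym(lib, "initialize", (void **) &init_plugin)) {
        fprintf(stderr, "dlsym error: %s\n", uv_dlerror(lib));
        return 1;
    }

    init_plugin(); /* hand control to the plugin */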
.. _shared libraries: https://en.wikipedia.org/wiki/Shared_library#Shared_libraries
TTY
---
Text terminals have supported basic formatting for a long time, with a `pretty
standardised`_ command set. This formatting is often used by programs to
improve the readability of terminal output. For example ``grep --colour``.
libuv provides the ``uv_tty_t`` abstraction (a stream) and related functions to
implement the ANSI escape codes across all platforms. By this I mean that libuv
converts ANSI codes to the Windows equivalent, and provides functions to get
terminal information.
.. _pretty standardised: https://en.wikipedia.org/wiki/ANSI_escape_sequences
The first thing to do is to initialize a ``uv_tty_t`` with the file descriptor
it reads/writes from. This is achieved with::

    int uv_tty_init(uv_loop_t*, uv_tty_t*, uv_file fd, int unused)
The ``unused`` parameter is now auto-detected and ignored. It previously needed
to be set to use ``uv_read_start()`` on the stream.
It is then best to use ``uv_tty_set_mode`` to set the mode to *normal*, which
enables most TTY formatting, flow-control and other settings. Other_ modes are
also available.
.. _Other: http://docs.libuv.org/en/v1.x/tty.html#c.uv_tty_mode_t
Remember to call ``uv_tty_reset_mode`` when your program exits to restore the
state of the terminal. Just good manners. Another set of good manners is to be
aware of redirection. If the user redirects the output of your command to
a file, control sequences should not be written as they impede readability and
``grep``. To check if the file descriptor is indeed a TTY, call
``uv_guess_handle`` with the file descriptor and compare the return value with
``UV_TTY``.
Here is a simple example which prints white text on a red background:
.. rubric:: tty/main.c
.. literalinclude:: ../../code/tty/main.c
:linenos:
:emphasize-lines: 11-12,14,17,27
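The essentials of that listing, as a hedged sketch (fd ``1`` is stdout;
``41``/``37`` select the red background and white text):

.. code-block:: c

    uv_tty_t tty;
    uv_write_t req;

    uv_tty_init(loop, &tty, 1, 0);
    uv_tty_set_mode(&tty, UV_TTY_MODE_NORMAL);

    if (uv_guess_handle(1) == UV_TTY) {
        /* really a terminal, so escape sequences are safe */
        uv_buf_t buf = uv_buf_init("\033[41;37m", 8);
        uv_write(&req, (uv_stream_t*) &tty, &buf, 1, NULL);
    }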
The final TTY helper is ``uv_tty_get_winsize()`` which is used to get the
width and height of the terminal and returns ``0`` on success. Here is a small
program which does some animation using the function and character position
escape codes.
.. rubric:: tty-gravity/main.c
.. literalinclude:: ../../code/tty-gravity/main.c
:linenos:
:emphasize-lines: 19,25,38
The escape codes are:
====== =======================
Code   Meaning
====== =======================
*2* J  Clear part of the screen, 2 is entire screen
H      Moves cursor to certain position, default top-left
*n* B  Moves cursor down by n lines
*n* C  Moves cursor right by n columns
m      Obeys string of display settings, in this case green background (40+2), white text (30+7)
====== =======================
As you can see this is very useful to produce nicely formatted output, or even
console based arcade games if that tickles your fancy. For fancier control you
can try `ncurses`_.
.. _ncurses: https://www.gnu.org/software/ncurses/ncurses.html
.. versionchanged:: 1.23.1: the `readable` parameter is now unused and ignored.
The appropriate value will now be auto-detected from the kernel.
----
.. [#] I was first introduced to the term baton in this context, in Konstantin
   Käfer's excellent slides on writing node.js bindings --
   https://kkaefer.com/node-cpp-modules/#baton
.. [#] mfp is My Fancy Plugin
.. _libev man page: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#COMMON_OR_USEFUL_IDIOMS_OR_BOTH