 rev="1">


-<section name="Syncache and syncookies">
-
-<para>
-We look at how various kernel settings affect the ability of the kernel
-to process requests. Let’s start with TCP/IP connection establishment.
-</para>
-
-<para>
-[ syncache, syncookies ]
-</para>
-
-</section>
-
-
 <section id="listen_queues"
          name="Listen queues">

@@ -59,7 +45,7 @@ receiving 1.5 times connections than the limit before it starts to discard
 the new connections. You may increase the limit using

 <programlisting>
-sysctl kern.ipc.somaxconn=4096
+sysctl kern.ipc.soacceptqueue=4096
 </programlisting>

 However, note that the queue is only a damper to quench bursts.
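To see whether the accept queue limit is actually being hit, FreeBSD exposes both the current limit and the per-socket queue lengths. A sketch (sysctl name as on recent FreeBSD; it was `kern.ipc.somaxconn` on older releases):

```shell
# Show the current kernel-wide accept queue limit:
sysctl kern.ipc.soacceptqueue

# List listening sockets with their current and maximum queue lengths:
netstat -Lan
```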
@@ -72,30 +58,10 @@ listen 80 backlog=1024;
 </programlisting>

 However, you may not set it more than the current
-<path>kern.ipc.somaxconn</path> value.
+<path>kern.ipc.soacceptqueue</path> value.
 By default nginx uses the maximum value of the FreeBSD kernel.
 </para>
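Putting the two settings together, a minimal sketch (the values are illustrative): raise the kernel limit first, and then the nginx backlog may be set up to that value:

```shell
# Raise the kernel-wide accept queue limit (illustrative value):
sysctl kern.ipc.soacceptqueue=4096

# A matching nginx listen directive may then use a backlog up to that limit:
#   listen 80 backlog=4096;
```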

-<para>
-<programlisting>
-</programlisting>
-</para>
-
-<para>
-<programlisting>
-</programlisting>
-</para>
-
-</section>
-
-
-<section id="sockets_and_files"
-         name="Sockets and files">
-
-<para>
-[ sockets, files ]
-</para>
-
 </section>


@@ -123,7 +89,7 @@ And on the Internet you may see recommendations to increase
 the buffer sizes to one or even several megabytes.
 However, such large buffer sizes are suitable for local networks
 or for networks under your control.
-Since on the Internet a slow modem client may ask a large file
+Since on the Internet a slow network client may ask a large file
 and then it will download the file during several minutes if not hours.
 All this time the megabyte buffer will be bound to the slow client,
 although we may devote just several kilobytes to it.
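The waste is easy to quantify. A back-of-the-envelope sketch (the client count and buffer sizes are assumed figures for illustration, not from the article):

```shell
#!/bin/sh
# Memory pinned by send buffers while many slow clients drain large files.
clients=1000                         # assumed number of slow clients
big=$(( clients * 1024 * 1024 ))     # bytes pinned with a 1M buffer each
small=$(( clients * 8 * 1024 ))      # bytes pinned with an 8K buffer each
echo "1M buffers: $(( big / 1048576 ))M, 8K buffers: $(( small / 1048576 ))M"
# prints: 1M buffers: 1000M, 8K buffers: 7M
```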
@@ -143,10 +109,6 @@ and devotes just tens kilobytes to connections,
 therefore it does not require the large buffer sizes.
 </para>

-<para>
-[ dynamic buffers ]
-</para>
-
 </section>


@@ -161,7 +123,7 @@ of data, for example, TCP/IP header. However, the mbufs point mostly
 to other data stored in the <i>mbuf clusters</i> or <i>jumbo clusters</i>,
 and in this kind they are used as the chain links only.
 The mbuf cluster size is 2K.
-The jumbo cluster size can be equal to a CPU page size (4K for i386 and amd64),
+The jumbo cluster size can be equal to a CPU page size (4K for amd64),
 9K, or 16K.
 The 9K and 16K jumbo clusters are used mainly in local networks with Ethernet
 frames larger than usual 1500 bytes, and they are beyond the scope of
@@ -214,21 +176,6 @@ Note that all allocated mbufs clusters will take about 440M physical memory:
 All allocated page size jumbo clusters will take yet about 415M physical memory:
 (100000 × (4096 + 256)).
 And together they may take about 855M.
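The 415M figure can be reproduced directly from the numbers in the text: 100000 page size jumbo clusters, each holding 4096 bytes of data plus the 256 byte mbuf that references it:

```shell
#!/bin/sh
# 100000 page size jumbo clusters × (4096 byte cluster + 256 byte mbuf)
jumbop=$(( 100000 * (4096 + 256) ))
echo "$(( jumbop / 1048576 ))M"   # bytes converted to MiB; prints: 415M
```

The 440M figure for the 2K mbuf clusters follows the same formula with the cluster count given earlier in the article.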
-
-<note>
-The page size jumbo clusters were introduced in FreeBSD 7.0.
-In earlier versions you should tune only 2K mbuf clusters.
-Prior to FreeBSD 6.2, the <path>kern.ipc.nmbclusters</path> value can be
-set only at boot time via a loader tunable.
-</note>
-</para>
-
-<para>
-On the amd64 architecture the FreeBSD kernel can use almost all physical
-memory for socket buffers,
-while on the i386 architecture no more than 2G of memory can be used,
-regardless of the available physical memory.
-We will discuss the i386 specific tuning later.
 </para>


 <para>
@@ -243,88 +190,11 @@ Thus, sendfile decreases both CPU usage by omitting two memory copy operations,
 and memory usage by using the cached file pages.
 </para>

-<para>
-And again, the amd64 sendfile implementation is the best:
-the zeros in the <nobr>“<literal>netstat -m</literal>”</nobr> output
-<programlisting>
-...
-<b>0/0/0</b> sfbufs in use (current/peak/max)
-...
-</programlisting>
-mean that there is no <i>sfbufs</i> limit,
-while on the i386 architecture you should tune them.
-</para>
-
-<!--
-
-<para>
-
-<programlisting>
-vm.pmap.pg_ps_enabled=1
-
-vm.kmem_size=3G
-
-net.inet.tcp.tcbhashsize=32768
-
-net.inet.tcp.hostcache.cachelimit=40960
-net.inet.tcp.hostcache.hashsize=4096
-net.inet.tcp.hostcache.bucketlimit=10
-
-net.inet.tcp.syncache.hashsize=1024
-net.inet.tcp.syncache.bucketlimit=100
-</programlisting>
-
-<programlisting>
-
-net.inet.tcp.syncookies=0
-net.inet.tcp.rfc1323=0
-net.inet.tcp.sack.enable=1
-net.inet.tcp.fast_finwait2_recycle=1
-
-net.inet.tcp.rfc3390=0
-net.inet.tcp.slowstart_flightsize=2
-
-net.inet.tcp.recvspace=8192
-net.inet.tcp.recvbuf_auto=0
-
-net.inet.tcp.sendspace=16384
-net.inet.tcp.sendbuf_auto=1
-net.inet.tcp.sendbuf_inc=8192
-net.inet.tcp.sendbuf_max=131072
-
-# 797M
-kern.ipc.nmbjumbop=192000
-# 504M
-kern.ipc.nmbclusters=229376
-# 334M
-kern.ipc.maxsockets=204800
-# 8M
-net.inet.tcp.maxtcptw=163840
-# 24M
-kern.maxfiles=204800
-</programlisting>
-
-</para>
-
-<para>
-
-<programlisting>
-sysctl net.isr.direct=0
-</programlisting>
-
-<programlisting>
-sysctl net.inet.ip.intr_queue_maxlen=2048
-</programlisting>
-
-</para>
-
--->
-
 </section>


 <section id="proxying"
-         name="Proxying">
+         name="Outgoing connections">


 <programlisting>
@@ -345,28 +215,4 @@ net.inet.tcp.fast_finwait2_recycle=1

 </section>
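For outgoing connections, the range of local ephemeral ports can also become a bottleneck under heavy proxying. A hedged sketch (the portrange value is illustrative and not from the article; only `fast_finwait2_recycle` appears in the settings above):

```shell
# Widen the range of local ports available for outgoing connections
# (illustrative value, not from the article):
sysctl net.inet.ip.portrange.first=1024

# Recycle sockets stuck in FIN_WAIT_2 state sooner:
sysctl net.inet.tcp.fast_finwait2_recycle=1
```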

-
-<section id="i386_specific_tuning"
-         name="i386 specific tuning">
-
-<para>
-[ KVA, KVM, nsfbufs ]
-</para>
-
-</section>
-
-
-<section id="minor_optimizations"
-         name="Minor optimizations">
-
-<para>
-
-<programlisting>
-sysctl kern.random.sys.harvest.ethernet=0
-</programlisting>
-
-</para>
-
-</section>
-
 </article>