
Commit d3c0a99

FreeBSD tuning: removed outdated or incomplete content.

1 parent 120efa0

File tree

1 file changed: +5 -159

xml/en/docs/freebsd_tuning.xml (+5 -159)
@@ -11,20 +11,6 @@
 rev="1">


-<section name="Syncache and syncookies">
-
-<para>
-We look at how various kernel settings affect ability of the kernel
-to process requests. Let&rsquo;s start with TCP/IP connection establishment.
-</para>
-
-<para>
-[ syncache, syncookies ]
-</para>
-
-</section>
-
-
 <section id="listen_queues"
 name="Listen queues">

@@ -59,7 +45,7 @@ receiving 1.5 times connections than the limit before it starts to discard
 the new connections. You may increase the limit using

 <programlisting>
-sysctl kern.ipc.somaxconn=4096
+sysctl kern.ipc.soacceptqueue=4096
 </programlisting>

 However, note that the queue is only a damper to quench bursts.
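A side note, not part of the patch: the sysctl command above only changes the running kernel. A minimal sketch for making the same limit persistent, reusing the illustrative value 4096, is to set it in /etc/sysctl.conf:

    # /etc/sysctl.conf -- applied at boot by rc(8)
    # kern.ipc.soacceptqueue is the accept (listen) queue limit;
    # kern.ipc.somaxconn remains available as a compatibility alias
    kern.ipc.soacceptqueue=4096

The active value can be checked afterwards with "sysctl kern.ipc.soacceptqueue".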
@@ -72,30 +58,10 @@ listen 80 backlog=1024;
 </programlisting>

 However, you may not set it more than the current
-<path>kern.ipc.somaxconn</path> value.
+<path>kern.ipc.soacceptqueue</path> value.
 By default nginx uses the maximum value of FreeBSD kernel.
 </para>

-<para>
-<programlisting>
-</programlisting>
-</para>
-
-<para>
-<programlisting>
-</programlisting>
-</para>
-
-</section>
-
-
-<section id="sockets_and_files"
-name="Sockets and files">
-
-<para>
-[ sockets, files ]
-</para>
-
 </section>


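For context only, not from the patch: when matching the nginx backlog parameter to kern.ipc.soacceptqueue, FreeBSD can show whether a listener's queue is actually filling up. A sketch, with the listen line repeated from the hunk header above:

    # current listen queue sizes (qlen/incqlen/maxqlen) of all listening sockets
    netstat -Lan
    # nginx side: backlog must stay within kern.ipc.soacceptqueue, e.g.
    #   listen 80 backlog=1024;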
@@ -123,7 +89,7 @@ And on the Internet you may see recommendations to increase
 the buffer sizes to one or even several megabytes.
 However, such large buffer sizes are suitable for local networks
 or for networks under your control.
-Since on the Internet a slow modem client may ask a large file
+Since on the Internet a slow network client may ask a large file
 and then it will download the file during several minutes if not hours.
 All this time the megabyte buffer will be bound to the slow client,
 although we may devote just several kilobytes to it.
@@ -143,10 +109,6 @@ and devotes just tens kilobytes to connections,
 therefore it does not require the large buffer sizes.
 </para>

-<para>
-[ dynamic buffers ]
-</para>
-
 </section>


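For reference only, a sketch outside this change: the buffer discussion above maps to the usual FreeBSD socket-buffer sysctls. The figures below simply repeat the values that appear in the commented-out block removed further down in this diff; they are not recommendations.

    # initial per-connection socket buffer sizes
    sysctl net.inet.tcp.sendspace=16384
    sysctl net.inet.tcp.recvspace=8192
    # let the kernel grow the send buffer for fast clients, capped at 128K
    sysctl net.inet.tcp.sendbuf_auto=1
    sysctl net.inet.tcp.sendbuf_inc=8192
    sysctl net.inet.tcp.sendbuf_max=131072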
@@ -161,7 +123,7 @@ of data, for example, TCP/IP header. However, the mbufs point mostly
 to other data stored in the <i>mbuf clusters</i> or <i>jumbo clusters</i>,
 and in this kind they are used as the chain links only.
 The mbuf cluster size is 2K.
-The jumbo cluster size can be equal to a CPU page size (4K for i386 and amd64),
+The jumbo cluster size can be equal to a CPU page size (4K for amd64),
 9K, or 16K.
 The 9K and 16K jumbo clusters are used mainly in local networks with Ethernet
 frames larger than usual 1500 bytes, and they are beyond the scope of
@@ -214,21 +176,6 @@ Note that all allocated mbufs clusters will take about 440M physical memory:
 All allocated page size jumbo clusters will take yet about 415M physical memory:
 (100000 &times; (4096 + 256)).
 And together they may take 845M.
-
-<note>
-The page size jumbo clusters have been introduced in FreeBSD 7.0.
-In earlier versions you should tune only 2K mbuf clusters.
-Prior to FreeBSD 6.2, the <path>kern.ipc.nmbclusters</path> value can be
-set only on the boot time via loader tunable.
-</note>
-</para>
-
-<para>
-On the amd64 architecture FreeBSD kernel can use for sockets buffers
-almost all physical memory,
-while on the i386 architecture no more than 2G memory can be used,
-regardless of the available physical memory.
-We will discuss the i386 specific tuning later.
 </para>

 <para>
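A worked check of the arithmetic above, as a sketch only: the 100000 page-size jumbo clusters are stated in the text, while the 2K-cluster limit of roughly 200000 is an assumption inferred from the quoted 440M figure.

    # each cluster is referenced by a 256-byte mbuf, hence the "+ 256"
    #   2K clusters:      200000 * (2048 + 256) = 460,800,000 bytes, about 440M
    #   page-size jumbos: 100000 * (4096 + 256) = 435,200,000 bytes, about 415M
    sysctl kern.ipc.nmbclusters=200000   # assumed limit, see note above
    sysctl kern.ipc.nmbjumbop=100000
    netstat -m                           # inspect current mbuf/cluster usage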
@@ -243,88 +190,11 @@ Thus, sendfile decreases both CPU usage by omitting two memory copy operations,
 and memory usage by using the cached file pages.
 </para>

-<para>
-And again, the amd64 sendfile implementation is the best:
-the zeros in the <nobr>“<literal>netstat -m</literal>”</nobr> output
-<programlisting>
-...
-<b>0/0/0</b> sfbufs in use (current/peak/max)
-...
-</programlisting>
-mean that there is no <i>sfbufs</i> limit,
-while on i386 architecture you should to tune them.
-</para>
-
-<!--
-
-<para>
-
-<programlisting>
-vm.pmap.pg_ps_enabled=1
-
-vm.kmem_size=3G
-
-net.inet.tcp.tcbhashsize=32768
-
-net.inet.tcp.hostcache.cachelimit=40960
-net.inet.tcp.hostcache.hashsize=4096
-net.inet.tcp.hostcache.bucketlimit=10
-
-net.inet.tcp.syncache.hashsize=1024
-net.inet.tcp.syncache.bucketlimit=100
-</programlisting>
-
-<programlisting>
-
-net.inet.tcp.syncookies=0
-net.inet.tcp.rfc1323=0
-net.inet.tcp.sack.enable=1
-net.inet.tcp.fast_finwait2_recycle=1
-
-net.inet.tcp.rfc3390=0
-net.inet.tcp.slowstart_flightsize=2
-
-net.inet.tcp.recvspace=8192
-net.inet.tcp.recvbuf_auto=0
-
-net.inet.tcp.sendspace=16384
-net.inet.tcp.sendbuf_auto=1
-net.inet.tcp.sendbuf_inc=8192
-net.inet.tcp.sendbuf_max=131072
-
-# 797M
-kern.ipc.nmbjumbop=192000
-# 504M
-kern.ipc.nmbclusters=229376
-# 334M
-kern.ipc.maxsockets=204800
-# 8M
-net.inet.tcp.maxtcptw=163840
-# 24M
-kern.maxfiles=204800
-</programlisting>
-
-</para>
-
-<para>
-
-<programlisting>
-sysctl net.isr.direct=0
-</programlisting>
-
-<programlisting>
-sysctl net.inet.ip.intr_queue_maxlen=2048
-</programlisting>
-
-</para>
-
--->
-
 </section>


 <section id="proxying"
-name="Proxying">
+name="Outgoing connections">


 <programlisting>
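The sendfile paragraph kept at the top of this hunk has a direct nginx-side switch. A minimal configuration sketch, not part of the patch; the surrounding http block is assumed:

    # nginx.conf fragment (sketch)
    http {
        sendfile  on;   # serve static files via sendfile(2), avoiding extra copies
    }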
@@ -345,28 +215,4 @@ net.inet.tcp.fast_finwait2_recycle=1

 </section>

-
-<section id="i386_specific_tuning"
-name="i386 specific tuning">
-
-<para>
-[ KVA, KVM, nsfbufs ]
-</para>
-
-</section>
-
-
-<section id="minor_optimizations"
-name="Minor optimizations">
-
-<para>
-
-<programlisting>
-sysctl kern.random.sys.harvest.ethernet=0
-</programlisting>
-
-</para>
-
-</section>
-
 </article>
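The context line in the hunk header above, net.inet.tcp.fast_finwait2_recycle=1, belongs to the renamed "Outgoing connections" section. As a persistent sketch only, assuming that setting is wanted across reboots:

    # /etc/sysctl.conf -- let FIN_WAIT_2 connections time out faster,
    # useful for the many short-lived outgoing (proxy) connections
    net.inet.tcp.fast_finwait2_recycle=1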
