Varnishlog busy
Latest revision as of 05:42, 6 October 2011
= Varnishlog is busy (error 503) =
To follow the varnish log and display all transactions that were served a busy page, run:
<pre>
varnishlog -c -m TxStatus:503
</pre>
Now you can see which pages, if any, are being served a busy/unavailable response (503).
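To get a feel for how often 503s (and other statuses) are going out, varnishtop can rank entries by the same log tag. A quick sketch, assuming the Varnish 3.x client-side tags used above:

<pre>
# Continuously updated ranking of response status codes sent to clients
varnishtop -c -i TxStatus
</pre>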
== Fixing ==
I had a lot of 503 errors because of a smallish malloc assignment in front of a webserver serving a lot of static pages.
I had to increase the default nuke_limit from 10 to 150 so the cache could be 'cleaned up'. In your varnish startup, add something like:
<pre>
-p nuke_limit=150 \
-p thread_pool_add_delay=2 \
-p thread_pools=2 \
-p thread_pool_max=4000 \
-p thread_pool_min=400 \
-p sess_workspace=16384 \
</pre>
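For context, here is a sketch of what a full varnishd startup line with these parameters might look like. The VCL path, listen address, admin port, and storage size are assumptions; adjust them to your own setup:

<pre>
# Example startup line -- paths, ports and storage size are placeholders
varnishd -f /etc/varnish/default.vcl \
         -a :80 \
         -T localhost:6082 \
         -s malloc,1G \
         -p nuke_limit=150 \
         -p thread_pool_add_delay=2 \
         -p thread_pools=2 \
         -p thread_pool_max=4000 \
         -p thread_pool_min=400 \
         -p sess_workspace=16384
</pre>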
I added the last five parameters as well, based on some calculations done in a best-practices write-up, but I haven't really looked much into them yet ( http://kristianlyng.wordpress.com/2010/01/26/varnish-best-practices ).
Also, for a list of all options and what they do, visit: https://www.varnish-cache.org/docs/trunk/reference/varnishd.html
= Stream through without caching =
From varnish 3.0 onward it is possible to make varnish stream content through on the fly without having to cache it up front. This is very useful for, well, streaming data, and for sites with many large files. I'm running multiple mirrors totalling over 1 TB of file data; having to first let varnish cache the files, then serve them, delete them as the cache filled up, serve again, and so on just causes unneeded data transfers.
Especially since I'm serving a lot of ISO files (CD and DVD).
So I added a passthrough for those:
<pre>
## Inside sub vcl_fetch { } ...
if (req.url ~ "\.(iso|rpm)$") {
    set beresp.do_stream = true;
}
</pre>
Keep in mind that the content will still be cached; if you don't want that, you'll need a direct passthrough layer as well.
So inside vcl_recv:
<pre>
## Inside sub vcl_recv
if (req.url ~ "\.(iso|rpm)$") {
    return (pass);
}
</pre>
This will make all matching files pass through without caching, fetched directly from your backend to the client(s).
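Putting both snippets together, a minimal default.vcl for Varnish 3.0 would look like the sketch below (backend definition and everything else omitted; this is an illustration, not a complete config):

<pre>
sub vcl_recv {
    # Bypass the cache entirely for large ISO/RPM files
    if (req.url ~ "\.(iso|rpm)$") {
        return (pass);
    }
}

sub vcl_fetch {
    # Stream the backend response to the client as it arrives
    if (req.url ~ "\.(iso|rpm)$") {
        set beresp.do_stream = true;
    }
}
</pre>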
= Notice =
Keep in mind that all of this is for varnish 3.0.0+!