A non-ASCII `split_path` caused `WithRequestSplitPath`'s error to be
swallowed and a nil `RequestOption` to be stored, panicking on the first
request. The fix propagates the error from `Provision`, rejecting the
config at startup with a clear message.
PHP-FPM recycles worker processes after a configurable number of
requests (`pm.max_requests`), preventing memory leaks from accumulating
over time. FrankenPHP keeps PHP threads alive indefinitely, so any leak
in PHP extensions (e.g. ZTS builds of profiling tools like Blackfire) or
application code compounds over hours/days. In production behind reverse
proxies like Cloudflare, this can lead to gradual resource exhaustion
and eventually 502 errors with no visible warnings in logs.
This PR adds a `max_requests` option in the global `frankenphp` block
that automatically restarts PHP threads after a given number of
requests, fully cleaning up the thread's memory and state. It applies to
both regular (module mode) and worker threads.
When a thread reaches the limit, it exits the C thread loop, triggering a
full cleanup including any memory leaked by extensions. A fresh thread
is then booted transparently. Other threads continue serving requests
during the restart.
This cannot be done from userland PHP: restarting a worker script from
PHP only resets PHP-level state, not the underlying C thread-local
storage where extension-level leaks accumulate. And in module mode
(without workers), there is no userland loop to count requests at all.
Default is `0` (unlimited), preserving existing behavior.
Usage:
```caddyfile
{
	frankenphp {
		max_requests 500
	}
}
```
Changes:
- New `max_requests` Caddyfile directive in the global `frankenphp`
block
- New `WithMaxRequests` functional option
- New `Rebooting` and `RebootReady` states in the thread state machine
for restart coordination
- Regular thread restart in `threadregular.go`
- Worker thread restart in `threadworker.go`
- Safe shutdown: `shutdown()` waits for in-flight reboots to complete
before draining threads
- Tests for both module and worker mode (sequential and concurrent),
with debug log verification
- Updated docs
I was advised at the Dutch PHP Conference (thanks @dseguy!) to move
the "Generate your extension" section before going deep into type
juggling. I agree: the reader will get much more satisfaction and will
be eager to dig into the topic if the basic case already works.
There may be more to improve, but this is a first quick win.
Currently, the `extension-init` command automatically generates a
boilerplate `README.md`. While helpful for the initial setup, this
behavior becomes destructive during iterative development. If a
developer has customized their README and later runs `extension-init` to
update function signatures or add new functions, their custom
documentation is overwritten without warning.
People at conferences often come by and ask what migrating from FPM to
FrankenPHP involves.
I think a "Migrating from..." page would be very reassuring for
newcomers. It only covers simple cases, but most complex ones can be
handled by LLMs if necessary.
Fixes #2268 (and maybe others)
In that issue, a timeout during a `curl_multi` request leads to a fatal
error and bailout during `php_request_shutdown()`. After looking at the
[FPM](https://github.com/php/php-src/blob/9011bd31d7c26b2f255e550171548eb024d1e4ce/sapi/fpm/fpm/fpm_main.c#L1926)
implementation I realized it also wraps `php_request_shutdown()` in a
`zend_bailout` guard, which we don't. This PR wraps the shutdown
function and restarts ZTS after an unexpected bailout, which fixes
#2268 and prevents potential crashes from bailouts during shutdown.
Still a draft since I think it might make sense to wrap the whole
request loop in a `zend_try`.
@henderkes This resolves the `symlink` issue for the symlink deployment
strategy.
_Alpine already has the force flag._
---------
Signed-off-by: Nordine <5256921+kitro@users.noreply.github.com>
This can potentially save us a few internal calls to
`zend_hash_do_resize` while it loops over the source table
(`main_thread_env`): the growth sequence `8 -> 16 -> 32 -> 64 -> 128`
becomes a single jump from `8` to the target size rounded up to the
next power of two (`128`).
## Summary
Hoists the `otter.LoaderFunc` closure in `GetUnCommonHeader` to a
package-level `loader` var, so it is allocated once at init time instead
of being re-created on every call.
This is a minor cleanup — the previous code created a new `LoaderFunc`
closure each time `GetUnCommonHeader` was called. While otter's
cache-hit path is fast enough that this doesn't show a measurable
difference in end-to-end benchmarks, avoiding the repeated allocation is
strictly better.
## What changed
**Before** (closure created per call):
```go
func GetUnCommonHeader(ctx context.Context, key string) string {
	phpHeaderKey, err := headerKeyCache.Get(
		ctx,
		key,
		otter.LoaderFunc[string, string](func(_ context.Context, key string) (string, error) {
			return "HTTP_" + headerNameReplacer.Replace(strings.ToUpper(key)) + "\x00", nil
		}),
	)
	...
}
```
**After** (closure allocated once):
```go
var loader = otter.LoaderFunc[string, string](func(_ context.Context, key string) (string, error) {
	return "HTTP_" + headerNameReplacer.Replace(strings.ToUpper(key)) + "\x00", nil
})

func GetUnCommonHeader(ctx context.Context, key string) string {
	phpHeaderKey, err := headerKeyCache.Get(ctx, key, loader)
	...
}
```
## Benchmarks
Apple M1 Pro, 8 runs, `benchstat` comparison — no regressions, no extra
allocations:
| Benchmark | main | PR | vs base |
|---|---|---|---|
| HelloWorld | 41.81µ ± 2% | 42.75µ ± 5% | ~ (p=0.065) |
| ServerSuperGlobal | 73.36µ ± 2% | 74.20µ ± 3% | ~ (p=0.105) |
| UncommonHeaders | 69.03µ ± 3% | 68.71µ ± 1% | ~ (p=0.382) |
All results within noise. Zero change in allocations.