vllm
pypi · v0.20.0 · A high-throughput and memory-efficient inference and serving engine for LLMs
License: Apache-2.0 (permissive) · 83 versions · 1 maintainer · 89 deps · 1,306,642 weekly downloads
Repository: vllm-project/vllm
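The package exposes a Python API for offline batch inference. A minimal sketch follows; the model id, prompt, and sampling settings are illustrative choices, not taken from this page:

    from vllm import LLM, SamplingParams

    # Load an example model (any Hugging Face model id vLLM supports would do)
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # Generate completions for a batch of prompts
    outputs = llm.generate(["The capital of France is"], params)
    for out in outputs:
        print(out.outputs[0].text)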
Health: 94 / 100 (safe to use)
[email protected] is safe to use (health: 94/100)
Health breakdown (0 – 100)
Maintenance: 25/25
Popularity: 17/20
Security: 25/25
Maturity: 15/15
Community: 12/15
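The overall score appears to be the sum of the five component scores: 25 + 17 + 25 + 15 + 12 = 94, matching the 94/100 shown above.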
Vulnerabilities: 0 (none known)
Quality signals
Publish security: API token
Dependencies (89)
regex, cachetools, psutil, sentencepiece, numpy, requests, tqdm, blake3, py-cpuinfo, transformers, tokenizers, protobuf, fastapi[standard], aiohttp, openai, pydantic, prometheus_client, pillow, prometheus-fastapi-instrumentator, tiktoken, lm-format-enforcer, llguidance, outlines_core, diskcache, lark, xgrammar, typing_extensions, filelock, partial-json-parser, pyzmq, msgspec, gguf, mistral_common[image], opencv-python-headless, pyyaml, six, setuptools, einops, compressed-tensors, depyf, cloudpickle, watchfiles, python-json-logger, ninja, pybase64, cbor2, ijson, setproctitle, openai-harmony, anthropic, model-hosting-container-standards, mcp, opentelemetry-sdk, opentelemetry-api, opentelemetry-exporter-otlp, opentelemetry-semantic-conventions-ai, numba, torch, torchaudio, torchvision
API access
Get this data programmatically — free, no authentication.
curl https://depscope.dev/api/check/pypi/vllm
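The same check can be scripted. A minimal sketch using Python's requests library; the endpoint is the one shown above, but the response schema is not documented on this page, so the code simply prints the raw JSON:

    import requests

    # Fetch the health-check data for the vllm package (no authentication needed)
    resp = requests.get("https://depscope.dev/api/check/pypi/vllm", timeout=10)
    resp.raise_for_status()
    print(resp.json())  # inspect the payload; the exact schema is not documented here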
Last updated · 2026-04-27T11:07:22.058843Z