[Bf-committers] Blender 2.5 malicious scripting

Benjamin Tolputt btolputt at internode.on.net
Tue Feb 23 11:33:31 CET 2010


<snip a lot of details & suggestions>

It is my impression that properly securing a machine against malicious
Python scripts would require building a sandbox. That is not possible
with CPython (there are known ways around every serious sandboxing
attempt I could find), and CPython is what Blender embeds. On top of
that, a solid portion of Blender's use cases require access to files on
the user's hard drive (for import and export purposes). As such,
sandboxing Python is not really the solution either.
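
To illustrate the kind of escape I mean, here is a sketch of one
well-known technique against a naive exec-based "sandbox". The
catch_warnings path is just one example; exact payloads vary by
interpreter version, but some such path always seems to exist:

    # A naive "sandbox": run untrusted code with the builtins stripped out.
    hostile = """
    # Climb from a harmless literal up to object, then walk every class
    # currently loaded in the interpreter.
    for cls in ().__class__.__base__.__subclasses__():
        # warnings.catch_warnings keeps a reference to its own module,
        # whose globals still hold the real builtins dict (assuming the
        # warnings module was imported, which it normally is at startup).
        if cls.__name__ == 'catch_warnings':
            real = cls()._module.__builtins__
            real['print']('escaped; builtins recovered:', real['len'](real))
            break
    """
    exec(hostile, {"__builtins__": {}})

Stripping __builtins__ removes nothing of substance: ordinary attribute
access is enough to crawl back to full power.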

In my opinion, the security of Blender in this regard comes down to two
things. First, opening a Blender file should not run any embedded
scripts until the user has been warned and asked about it. Second,
there needs to be some user education on the safety aspects (or lack
thereof) of running unknown scripts. Python is simply too flexible (and
hence "insecure" in the current context) for anything else to work.

On a side note: the only properly secure method of sandboxing Python I
could find is a complicated use of PyPy (a special sandboxed pypy-c
build) that routes all unsafe accesses through a separate trusted
"sandbox" process. Python is simply not a language/platform designed
with security in mind. This is one of the reasons I would have liked a
move to Lua or a similar language, where one can lock down a script's
environment. I totally understand why that didn't happen, though: the
momentum of existing skills and scripts is hard to push aside, even for
a better solution.
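
For anyone curious, here is a toy sketch of the shape of that broker
architecture, not PyPy's actual protocol. (Plain CPython as used below
cannot truly confine the child; the real scheme relies on an
interpreter built without direct OS access. This only shows the
allow/deny flow.)

    import json
    import subprocess
    import sys

    # Untrusted side: every would-be file operation becomes a message on
    # stdout instead of a direct OS call.
    CHILD = r'''
    import json
    for path in ("/etc/passwd", "model.obj"):
        print(json.dumps({"op": "open", "path": path}), flush=True)
    '''

    ALLOWED = {"model.obj"}  # policy: only the file the user chose to import

    proc = subprocess.Popen([sys.executable, "-c", CHILD],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        request = json.loads(line)
        ok = request["op"] == "open" and request["path"] in ALLOWED
        print("ALLOW" if ok else "DENY", request["path"])
    proc.wait()

The trusted process holds the policy; the untrusted one never touches
the filesystem itself. That is exactly the part CPython cannot give us
in-process.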

-- 
Regards,

Benjamin Tolputt
Analyst Programmer
