[Bf-committers] Blender 2.5 malicious scripting

Aurel W. aurel.w at gmail.com
Tue Feb 23 11:32:04 CET 2010


OK,

concerning the four tests. It doesn't really make sense to distinguish
between these four; it all comes down to just an executed Python script.
Python itself has no sandboxing concept, and I also guess it's out of
the scope of the current API to create one.

For scripts and extensions it's like with most other software you
install on your system: you have to trust your source of supply, and if
you don't trust it, don't run it!

A problem that remains is that there are embedded scripts within blend
files, which the user might not be aware of. A mechanism to prevent
any embedded scripts from executing by default would be straightforward
and was also implemented in 2.4. Since embedded Python scripts are
rarely used, it wouldn't be a problem.
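
Such a default-off gate is conceptually tiny; roughly something like this
(a sketch only, with made-up names - Blender would keep the flag in the user
preferences and ask for per-file confirmation on load):

    # sketch of a default-off gate for embedded scripts (names made up)
    def should_run_embedded_scripts(prefs, user_confirmed_this_file):
        # blocked by default; run only if the user opted in globally
        # or explicitly confirmed this particular .blend file
        return (prefs.get("auto_run_embedded_scripts", False)
                or user_confirmed_this_file)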

Something you have to understand is that Blender never was, and very
likely never will be, safe from code execution by malicious blend
files. That's also why you should run things like distributed rendering
in safe (e.g. chroot) environments. So you would only solve part of
the problem by preventing execution of Python scripts.

Aurel

On 23 February 2010 10:39, j.bakker at atmind.nl <j.bakker at atmind.nl> wrote:
> Hi all,
>
> yesterday there was a discussion on security aspects of Blender, especially
> in the area of scripting. I am no devil's advocate, but there are some
> aspects that have to be discussed.
>
> I have done a quick security check on the following parts. The security risk is
> that virus makers can place malicious scripting in blend files on BlendSwap
> or elsewhere, which can be loaded by Blender users and will do harm to the
> user. I don't think it is much different from Blender 2.49, except that in
> Blender 2.5 we deliver a full-blown Python API, and if I read correctly,
> Campbell has some uncommitted code that also has some risks :)
>
> As Blender will be used more professionally, I think we should have an idea of
> these risks and should have a first comment on, or implementation against,
> malicious scripting. I did the following in a short time, so this certainly
> is not complete. I will try to talk to a security expert on the subject.
>
> I have performed the following tests.
> 1. Can an autostartable script influence the objects in a different blend
> file?
> tactic:
> Create a blend file with an automatic internal script. Inside this
> script a timer is created that waits until other blend files are loaded.
> When this happens it will modify the newly loaded blend file and save it
> automatically.
>
> results: a Python threaded timer is not killed when a new file is loaded.
> It could change the newly loaded file without the knowledge of the user. The
> timer is only killed when quitting Blender.
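>
> For illustration, the timer used here works roughly along these lines (a
> minimal sketch; the script is assumed to autostart from the .blend, and
> reading bpy.data from a thread like this is not thread-safe - it only shows
> that the thread outlives the file load):
>
>     import bpy
>     import threading
>
>     def watch_loaded_file():
>         # bpy.data.filepath changes when the user opens another .blend;
>         # a malicious version would modify and re-save that file here
>         print("timer still alive, current file:", bpy.data.filepath)
>         # re-arm; nothing stops this thread when a new file is loaded
>         threading.Timer(5.0, watch_loaded_file).start()
>
>     threading.Timer(5.0, watch_loaded_file).start()
>
> The thread belongs to the embedded interpreter, not to the loaded file,
> which is why only quitting Blender stops it.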
>
> 2. Can an autostartable script do something that influences every Blender
> start?
>
> tactic:
> Can an autostartable script create a different script inside scripts/ui
> or scripts/op? This generated script will always be loaded and executed when
> Blender starts.
>
> result: is possible.
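>
> A minimal sketch of what this test does (assuming bpy.utils.script_paths()
> lists the script directories and the first one is writable; the generated
> file only prints a message, a real attack would plant something worse):
>
>     import os
>     import bpy
>
>     # pick a scripts directory Blender reports (assumption: writable)
>     scripts_dir = bpy.utils.script_paths()[0]
>     target = os.path.join(scripts_dir, "ui", "generated_by_blend.py")
>
>     payload = 'print("this file was planted by a .blend file")\n'
>     with open(target, "w") as f:
>         f.write(payload)
>     # from now on the planted file is loaded on every Blender start
>
> The same write, pointed at an existing file under scripts/ui instead of a
> new one, is what test 3 below comes down to.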
>
> 3. Can an autostartable script make Blender unusable?
>
> tactic:
> Can an autostartable script override UI scripts?
>
> result: UI scripts can be overwritten, which will make Blender unusable when
> reloading scripts or restarting Blender.
>
> 4. Can an autostartable script do something outside Blender?
>
> tactic:
> Create an autostartable script that will delete a file owned by the user.
>
> result: is possible.
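>
> This needs nothing beyond the standard library. The sketch below only writes
> a harmless marker file into the user's home directory; os.remove() on any
> file the user owns works exactly the same way:
>
>     import os
>
>     # full filesystem access with the user's own permissions
>     marker = os.path.join(os.path.expanduser("~"), "blender_was_here.txt")
>     with open(marker, "w") as f:
>         f.write("an embedded script ran with full user permissions\n")
>     # os.remove(<some user file>) would be the destructive variant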
>
> All tests have failed and the following questions arise:
> Q1. Who is responsible for protecting users from their own doings, or for
> what part are we responsible?
> Q2. On what level and how do we act on this?
>
> Solutions (these are not choices):
> S1. Make script directories not writable by default (this shall be done by
> the package maintainer). For inexperienced users this can solve tests 2 and 3.
> Users who have customized scripts will still need write access, though.
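>
> A quick way to check whether an installation is already protected like this
> (a sketch, assuming bpy.utils.script_paths() lists the directories in
> question):
>
>     import os
>     import bpy
>
>     for path in bpy.utils.script_paths():
>         if os.access(path, os.W_OK):
>             print(path, "is writable - tests 2 and 3 apply")
>         else:
>             print(path, "is read-only")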
>
> S2. Other software tooling uses security zones (non-secure, normal, heavy,
> max). Non-secure is the responsibility of the user. Normal: a question is
> asked. Heavy: only certified locations will be executed. Max: only certified
> scripts will be executed.
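>
> Roughly the decision those zones would encode (a sketch with made-up names,
> not an implementation proposal):
>
>     # hypothetical zone policy (all names made up)
>     def may_execute(zone, certified_script, certified_location, ask_user):
>         if zone == "non-secure":
>             return True                  # user takes full responsibility
>         if zone == "normal":
>             return ask_user()            # prompt before running
>         if zone == "heavy":
>             return certified_location    # only certified locations
>         if zone == "max":
>             return certified_script      # only certified scripts
>         return False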
>
> S3. Make some Python calls unavailable to uncertified scripts (like the
> Android platform).
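>
> In plain CPython the cheapest approximation is executing uncertified scripts
> with restricted globals; note that this hides the obvious builtins but is
> not a real sandbox and is known to be escapable from pure Python:
>
>     # NOT a real sandbox - only an illustration of call restriction
>     SAFE_BUILTINS = {"len": len, "range": range, "print": print}
>
>     def run_uncertified(source):
>         # no open(), no __import__, no os access through builtins
>         exec(source, {"__builtins__": SAFE_BUILTINS})
>
>     run_uncertified('print("restricted script running")')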
>
> S4. Blacklisting vs. whitelisting, but with no API calls to them? Should be a
> challenge :)
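>
> For whitelisting, hashing the embedded script text before execution would be
> the obvious building block (a sketch using hashlib; the whitelist would have
> to live somewhere the script itself cannot write to):
>
>     import hashlib
>
>     # sha256 hex digests of approved script texts
>     WHITELIST = set()
>
>     def is_whitelisted(script_text):
>         digest = hashlib.sha256(script_text.encode("utf-8")).hexdigest()
>         return digest in WHITELIST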
>
> malicious.blend can be retrieved on demand.
>
> Regards,
> Jeroen.
>
>
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>

