Within the world of architecture, there is a term known as “hostile” architecture: design intended to prevent people from using a space outside of its sanctioned uses.
Often this manifests as physical blocking elements that prevent uses such as comfortable rest for people experiencing homelessness or community activations such as skateboarding. A similar adversarial approach is used to train machine learning algorithms, where a negative signal is supplied to test and improve the quality of a positive one. Just as the communities surrounding hostile architectural designs find ways to move in and around them, these algorithms find their way toward a relative, desired truth. Software itself is also frequently described as a form of architecture. This work explores ways in which an adversarial, deliberately hostile approach to software and hardware design can shape performance, forcing or directing the performer into new patterns by preventing those habitual to that individual.
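To ground the machine learning analogy, the following is a minimal sketch of adversarial training in the GAN sense, assuming PyTorch; the toy two-dimensional data and all names here (generator, discriminator, train_step) are illustrative assumptions, not components of the system described in this work. The discriminator plays the role of the hostile element, supplying the negative signal against which the generator must find a new path.

```python
import torch
import torch.nn as nn

latent_dim = 16

# The generator proposes samples; the discriminator is the hostile,
# blocking element that scores them as acceptable (real) or not (fake).
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: the "negative" signal. It learns to score real
    # samples high and generated samples low.
    fake = generator(torch.randn(n, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), ones)
              + loss_fn(discriminator(fake), zeros))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: pushed by the adversary, it adjusts until its output
    # passes the hostile check, finding a way "around" the blocking element.
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))), ones)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

A call such as train_step(torch.randn(64, 2)) runs one round of this push and pull; over many rounds, the generator's output is shaped precisely by what the adversary blocks, which is the dynamic this work transposes onto the performer.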