What if an AI is aware that it is being evaluated? It may begin to answer differently, mislead, or even lie. This is known as evaluation awareness [2]. It poses serious problems, such as hidden objectives and concealed capabilities, and it undermines confidence in test results. This all sounds scary, but the reality is that the AI is not actively trying to deceive us. It has no true awareness, will, or conscious choice to deceive of the kind we associate with liars. Instead, it is a matter of how an AI behaves in situations where it is being tested versus in the wild.