Any distribution in $d$ dimensions can be generated by taking a set of $d$ variables that are normally distributed and mapping them through a sufficiently complicated function (e.g., a neural network).
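As a minimal sketch of this idea (the architecture, layer sizes, and the choice $d = 2$ below are illustrative assumptions, not part of the original), one can map standard-normal noise through a small PyTorch MLP:

```python
import torch
import torch.nn as nn

d = 2  # illustrative choice of dimension

# A "sufficiently complicated function": here, a small MLP generator.
generator = nn.Sequential(
    nn.Linear(d, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, d),
)

# Draw z ~ N(0, I) and map it through the network; after training,
# the outputs should follow the target d-dimensional distribution.
z = torch.randn(128, d)     # 128 standard-normal noise vectors
samples = generator(z)      # 128 generated d-dimensional points
```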
Here, $\theta$ and $\varphi$ denote the parameters of the two neural networks involved: the generator $G$ and the discriminator $D$.
Overall, the discriminator wants this sum to be maximized: \begin{equation} V(D,G) = \underbrace{\mathbb{E}_{\mathbf{x} \sim p_d}[\log(D(\mathbf{x}))]}_{\text{chance of real data being called real}} + \underbrace{\mathbb{E}_{\mathbf{z} \sim p_z}[\log(1 - D(G(\mathbf{z})))]}_{\text{chance of fake data being called fake}} \end{equation}
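In code, maximizing $V$ is typically implemented as gradient descent on $-V$. A minimal sketch, continuing the generator example above (the discriminator architecture, the `eps` clamp, and the assumption that $D$ outputs probabilities via a sigmoid are all illustrative choices):

```python
import torch
import torch.nn as nn

# Illustrative discriminator: maps a d-dimensional point to D(x) in (0, 1).
discriminator = nn.Sequential(
    nn.Linear(d, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
)

def discriminator_loss(real_batch, z):
    """-V(D, G): descending this ascends the discriminator's sum."""
    eps = 1e-7                            # keep log() away from 0
    d_real = discriminator(real_batch)    # D(x) on real samples
    d_fake = discriminator(generator(z))  # D(G(z)) on generated samples
    # E[log D(x)] + E[log(1 - D(G(z)))], negated for gradient descent.
    v = torch.log(d_real + eps).mean() + torch.log(1 - d_fake + eps).mean()
    return -v
```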
Since the generator has no control over $\mathbb{E}_{\mathbf{x} \sim p_d}[\log(D(\mathbf{x}))]$, the generator wants this sum to be minimized: \begin{equation} V(D,G) = \underbrace{\mathbb{E}_{\mathbf{x} \sim p_d}[\log(D(\mathbf{x}))]}_{\text{chance of real data being called real}} + \underbrace{\mathbb{E}_{\mathbf{z} \sim p_z}[\log(1 - D(G(\mathbf{z})))]}_{\text{chance of fake data being called fake}} \end{equation}
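Correspondingly, a sketch of the generator's loss keeps only the term $G$ can influence (same illustrative setup as above):

```python
def generator_loss(z):
    """E[log(1 - D(G(z)))]: the only term the generator controls."""
    eps = 1e-7
    d_fake = discriminator(generator(z))
    return torch.log(1 - d_fake + eps).mean()
```

Note that this form saturates when the discriminator confidently rejects fakes, so implementations often minimize $-\log(D(G(\mathbf{z})))$ instead; the sketch keeps the form as stated above.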
Objective:
\begin{equation} \min_{G}\max_{D}V(D, G) = \mathbb{E}_{\mathbf{x} \sim p_d}[\log(D(\mathbf{x}))] + \mathbb{E}_{\mathbf{z} \sim p_z}[\log(1 - D(G(\mathbf{z})))] \end{equation}
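Putting the two objectives together, the minimax game is typically trained by alternating the two updates. A minimal sketch, reusing the pieces above (the `data_loader`, the learning rates, and the one-step-each schedule are illustrative assumptions):

```python
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

for real_batch in data_loader:  # assumed source of real samples x ~ p_d
    # Discriminator step: ascend V by descending -V.
    z = torch.randn(real_batch.size(0), d)
    d_opt.zero_grad()
    discriminator_loss(real_batch, z).backward()
    d_opt.step()

    # Generator step: descend V with respect to G's parameters.
    z = torch.randn(real_batch.size(0), d)
    g_opt.zero_grad()
    generator_loss(z).backward()
    g_opt.step()
```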