Hello!

Don't aim for success if you want it; just do what you love and believe in, and it will come naturally.
Pub. Date: Feb. 22, 2017, 10:12 p.m. Topic: Computer Vision Tag: Reading Note

TITLE: Adversarial Discriminative Domain Adaptation

AUTHOR: Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell

ASSOCIATION: UC Berkeley, Stanford University, Boston University

FROM: arXiv:1702.05464

CONTRIBUTIONS

  1. A novel unified framework for adversarial domain adaptation, called Adversarial Discriminative Domain Adaptation (ADDA), is proposed.
  2. Design choices such as weight sharing, base models, and adversarial losses are unified in a single view that subsumes previous work.

METHOD

The main idea of this work is to find a mapping function that projects target data (the data used for testing) into the source domain (the data used for training). The training procedure is illustrated in the following figure.

As the figure shows, first pre-train a source encoder CNN using labeled source image examples. Next, perform adversarial adaptation by learning a target encoder CNN such that a discriminator that sees encoded source and target examples cannot reliably predict their domain label. During testing, target images are mapped with the target encoder to the shared feature space and classified by the source classifier. Dashed lines indicate fixed network parameters.
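To make the components concrete, here is a minimal PyTorch-style sketch of the two encoders, classifier, and discriminator described above. The architectures and layer sizes are my own illustrative assumptions (a LeNet-like encoder for 28x28 single-channel digit images), not the paper's exact models.

```python
import torch.nn as nn

# Illustrative LeNet-like encoder; the paper swaps in different
# base architectures depending on the experiment.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(20, 50, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(), nn.Linear(50 * 4 * 4, 500), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

source_encoder = Encoder()       # M_s: trained on labeled source data, then frozen
target_encoder = Encoder()       # M_t: initialized from M_s, adapted adversarially
classifier = nn.Linear(500, 10)  # C: source classifier, reused at test time
discriminator = nn.Sequential(   # D: predicts the domain of an encoded example
    nn.Linear(500, 500), nn.ReLU(),
    nn.Linear(500, 1),           # outputs a logit; sigmoid is applied in the loss
)
```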

The ADDA method can be formalized as:

$$ \min \limits_{M_{s}, C} \mathcal{L}_{cls}(\mathbf{X}_{s}, \mathbf{Y}_{s}) = -\mathbb{E}_{(\mathbf{x}_{s}, y_{s})\sim(\mathbf{X}_{s}, \mathbf{Y}_{s})}\left[\sum_{k=1}^{K} \mathbf{1}_{[k=y_{s}]} \log C(M_{s}(\mathbf{x}_{s}))\right]$$

$$ \min \limits_{D} \mathcal{L}_{adv_{D}}(\mathbf{X}_{s}, \mathbf{X}_{t}, M_{s}, M_{t}) = -\mathbb{E}_{\mathbf{x}_{s}\sim \mathbf{X}_{s}} [\log D(M_{s}(\mathbf{x}_{s}))] -\mathbb{E}_{\mathbf{x}_{t}\sim \mathbf{X}_{t}} [\log (1-D(M_{t}(\mathbf{x}_{t})))]$$

$$ \min \limits_{M_{t}} \mathcal{L}_{adv_{M}}(\mathbf{X}_{s}, \mathbf{X}_{t}, D) = -\mathbb{E}_{\mathbf{x}_{t}\sim \mathbf{X}_{t}} [\log D(M_{t}(\mathbf{x}_{t}))]$$

The first formula is a typical supervised classification loss, used to pre-train the source encoder and classifier. The second formula follows the standard GAN objective: it learns a discriminator $D$ to tell target data from source data, and its first term is constant with respect to the adaptation because $M_s$ is fixed. The third formula learns the mapping $M_t$ that brings target-domain data into the source feature space; instead of directly maximizing the discriminator loss, it trains $M_t$ against inverted domain labels, which provides stronger gradients early in training.
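Below is a hedged sketch of how the three objectives could be optimized, reusing the modules from the earlier sketch. The data loaders, optimizers, and label convention (source = 1, target = 0) are assumptions for illustration, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

# Stage 1: pre-train M_s and C with the supervised loss (first formula).
# `source_loader` / `target_loader` are assumed DataLoaders; target labels are unused.
opt_src = torch.optim.Adam(
    list(source_encoder.parameters()) + list(classifier.parameters()))
for x_s, y_s in source_loader:
    loss_cls = F.cross_entropy(classifier(source_encoder(x_s)), y_s)
    opt_src.zero_grad()
    loss_cls.backward()
    opt_src.step()

# Stage 2: adversarial adaptation. M_s is frozen; only D and M_t are updated.
target_encoder.load_state_dict(source_encoder.state_dict())  # initialize M_t from M_s
for p in source_encoder.parameters():
    p.requires_grad_(False)
opt_d = torch.optim.Adam(discriminator.parameters())
opt_t = torch.optim.Adam(target_encoder.parameters())
for (x_s, _), (x_t, _) in zip(source_loader, target_loader):
    # Discriminator step (second formula): source -> 1, target -> 0.
    d_src = discriminator(source_encoder(x_s))
    d_tgt = discriminator(target_encoder(x_t).detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Mapping step (third formula): train M_t with inverted labels to fool D.
    d_tgt = discriminator(target_encoder(x_t))
    loss_m = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    opt_t.zero_grad()
    loss_m.backward()
    opt_t.step()

# Testing: target images go through the adapted encoder and the source classifier.
# predictions = classifier(target_encoder(x_t)).argmax(dim=1)
```

Note that the discriminator step detaches the target features so that only $D$ is updated, while the mapping step leaves them attached so the gradient flows into $M_t$; this mirrors the alternating minimization of the second and third formulas.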


Pub. Date: Feb. 19, 2017, 6:03 p.m. Topic: Life Discovery Tag: Odds and Ends

I've been a great fan of ancient history and warfare since my childhood. Maybe the first enlightenment came from the computer game series Age of Empires. I didn't know Joan of Arc until I played her campaign in Age of Empires II when I was 11 or 12 years old. The story in the game was so attractive that I searched the Internet for who Joan of Arc was and what she did. I was touched by her patriotic acts and sacrifices. I even began to take an interest in French history and wanted to study French in university, though I finally chose EE as my major and became an engineer in AI.

I played Age of Empires II HD for a while this weekend because I found it on sale on Steam. It brought me back to my childhood. Memories of playing this game with my friends came back to me. We had fun playing, read stories of heroes, and quarrelled about who was the greatest one in history. This is a classic computer game.


Pub. Date: Feb. 16, 2017, 6:59 p.m. Topic: Life Discovery Tag: Drawing


Pub. Date: Feb. 15, 2017, 8:43 p.m. Topic: Life Discovery Tag: Odds and Ends

Forgiveness comes from courage.


Pub. Date: Feb. 12, 2017, 11:21 p.m. Topic: Life Discovery Tag: Little Things

So it turns out that the chain-post curse "repost this or your whole family dies" comes from Ringu (《午夜凶铃》).