What did a Shanghai fashion show look like a hundred years ago? Recently, Dagu once again used AI to restore footage of the Shanghai fashion scene from a century ago. The video vividly reproduces the fashion frontier of 1929: bob haircuts, evening gowns, mink coats, and jade ear studs. More notably, the young ladies in it have far from ordinary backgrounds. This article is from the WeChat official account Xin Zhiyuan (ID: AI_era); author: Xin Zhiyuan; editor: Ya Xin; header image courtesy of Bilibili uploader @大谷Spitzer.

What was the Shanghai fashion show like a hundred years ago?

Bilibili uploader @大谷Spitzer (Dagu), who previously restored century-old footage of Beijing along with its original soundtrack, has now used AI to restore a Shanghai fashion show from a hundred years ago. Artificial intelligence handled the colorization, frame interpolation, and resolution upscaling.

The footage comes from the film collection of the University of South Carolina's moving image library. It was shot by Fox on March 4, 1929, and records the most fashionable, cutting-edge trends of that era.

Time travel: experiencing the 1929 Shanghai fashion show

The video also restores the original audio of the era. The narrator is Miss Sze; the other two ladies who appear in the hair-accessory segment below are Mrs. Kum and Mrs. Fan.

First, the fashion show itself. The blend of Eastern and Western beauty is enough to make you exclaim how gorgeous it is.

To the sound of greetings and music, Miss Sze enters wearing an evening dress that mixes Chinese and Western fashion elements. The patterns on the dress are entirely hand-painted; the short-skirt silhouette was popular in the West, while the high-collar neckline preserves an Oriental touch.

The next outfit is a chiffon-velvet evening dress with an asymmetric hemline. Netizens commented that its beauty transcends the era; the dress would not look dated even today.

The next look is pure "Shanghai Bund" style: a mink coat worn over a cheongsam.

The next outfit is the most popular everyday wear of the time.

A bob haircut and jade ear studs, matched with a ring and a brooch. One netizen remarked, "A Chinese woman who dressed like this and spoke English a hundred years ago... that is real high society."

According to research shared by a history buff, these ladies' backgrounds are indeed far from ordinary:

"The one on the far left is Tang Baomei (1902–1941), the eighth daughter of Tang Shaoyi, the first Premier of the Beiyang Government (after divorcing and remarrying, she changed her surname to Cen). She organized the first large-scale fashion show in Chinese history. The one on the far right is Pauline Fan (1909–1984), wife of the famous Shanghai architect Fan Wenzhao and a well-known figure in the social circles of Republican-era Shanghai. The English-speaking narrator in the middle is Wen Shihuizhen (1906–2004), who married a cousin of the three Soong sisters; her husband, Mr. Wen Yuqing, was one of China's top experts in radio communication technology."

Let us take a look at the original video below:


How was the video made? With three open-source AI tools

Three AI programs open-sourced on GitHub were used: DAIN for frame interpolation, ESRGAN for resolution upscaling, and DeOldify for colorization. Combined, they produced this video.
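To make the division of labor concrete, here is a minimal Python sketch of how such a three-stage pipeline might be wired together. Only the ffmpeg commands for splitting and reassembling frames are real; colorize_frames, interpolate_frames, and upscale_frames are hypothetical placeholders standing in for the DeOldify, DAIN, and ESRGAN inference scripts (each repo ships its own scripts and flags), and the stage order and frame rates here are purely illustrative.

```python
import subprocess
from pathlib import Path

def extract_frames(video: str, out_dir: str, fps: int = 18) -> None:
    """Split the old film into still frames with ffmpeg."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video, "-vf", f"fps={fps}", f"{out_dir}/%06d.png"],
        check=True,
    )

def assemble_video(frame_dir: str, out_video: str, fps: int = 60) -> None:
    """Re-encode the processed frames back into a video with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-framerate", str(fps), "-i", f"{frame_dir}/%06d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", out_video],
        check=True,
    )

# Hypothetical wrappers around the three open-source models; in practice each
# project (DeOldify, DAIN, ESRGAN) is driven through its own inference script.
def colorize_frames(src: str, dst: str) -> None: ...     # DeOldify: grayscale -> color
def interpolate_frames(src: str, dst: str) -> None: ...  # DAIN: low fps -> high fps
def upscale_frames(src: str, dst: str) -> None: ...      # ESRGAN: super-resolution

if __name__ == "__main__":
    extract_frames("shanghai_1929.mp4", "frames/raw")
    colorize_frames("frames/raw", "frames/color")
    interpolate_frames("frames/color", "frames/interp")
    upscale_frames("frames/interp", "frames/hires")
    assemble_video("frames/hires", "shanghai_1929_restored.mp4")
```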

In addition, Dagu used VirtualDub, together with a number of paid plug-ins, to denoise the old footage.

DAIN: The depth-aware video frame interpolation (DAIN) model explicitly handles occlusion by exploiting depth information. The project develops a depth-aware flow projection layer that preferentially samples closer objects when synthesizing the intermediate flows used to interpolate video frames.
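A toy illustration of "preferentially sampling closer objects": when several source pixels project onto the same target location at the intermediate time, their flows are blended with weights proportional to inverse depth, so nearer objects dominate. This is a simplified NumPy sketch of that weighting scheme under those assumptions, not DAIN's actual GPU layer.

```python
import numpy as np

def depth_aware_flow_projection(flow_0to1: np.ndarray, depth0: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Toy depth-aware flow projection (simplified from the DAIN idea).

    flow_0to1: (H, W, 2) optical flow from frame 0 to frame 1.
    depth0:    (H, W) depth of frame 0 (larger = farther away).
    Returns an estimate of the flow from the intermediate frame at time t back
    to frame 0. Colliding projections are averaged with inverse-depth weights,
    so closer objects win; holes are simply left as zero in this toy version.
    """
    H, W, _ = flow_0to1.shape
    proj_flow = np.zeros((H, W, 2), dtype=np.float64)
    weights = np.zeros((H, W), dtype=np.float64)

    for y in range(H):
        for x in range(W):
            fx, fy = flow_0to1[y, x]
            # Where does pixel (x, y) of frame 0 land at time t?
            tx, ty = int(round(x + t * fx)), int(round(y + t * fy))
            if 0 <= tx < W and 0 <= ty < H:
                w = 1.0 / (depth0[y, x] + 1e-6)  # closer => larger weight
                proj_flow[ty, tx] += w * (-t) * flow_0to1[y, x]
                weights[ty, tx] += w

    mask = weights > 0
    proj_flow[mask] /= weights[mask][:, None]
    return proj_flow
```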

ESRGAN: ESRGAN improves on SRGAN and is used here mainly for video super-resolution. Whereas deeper SRGAN models become increasingly difficult to train, the deeper ESRGAN model achieves excellent performance with straightforward training. A core idea is a network interpolation strategy that balances perceptual quality against peak signal-to-noise ratio (PSNR).
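The network interpolation itself is just a weighted average of two models' parameters, θ_interp = (1 − α)·θ_PSNR + α·θ_GAN, blending a PSNR-oriented checkpoint with a GAN-trained one. A minimal PyTorch sketch, assuming two checkpoints with identical architectures; the file names in the usage comment are placeholders.

```python
import torch

def interpolate_networks(psnr_state: dict, gan_state: dict, alpha: float = 0.8) -> dict:
    """Blend the parameters of a PSNR-oriented and a GAN-trained model.

    alpha = 0 keeps the PSNR model (better metrics, smoother textures);
    alpha = 1 keeps the GAN model (richer textures, more artifacts);
    values in between trade off the two.
    """
    assert psnr_state.keys() == gan_state.keys(), "architectures must match"
    return {
        name: (1.0 - alpha) * psnr_state[name] + alpha * gan_state[name]
        for name in psnr_state
    }

# Usage sketch (checkpoint paths are placeholders):
# psnr_state = torch.load("model_psnr.pth", map_location="cpu")
# gan_state  = torch.load("model_gan.pth", map_location="cpu")
# model.load_state_dict(interpolate_networks(psnr_state, gan_state, alpha=0.8))
```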

DeOldify: DeOldify is trained with NoGAN, which is essential for obtaining stable, richly colored images. NoGAN training keeps the benefits of GAN training (beautiful colorization) while eliminating its side effects (such as flickering objects in the video). Video is rendered by colorizing each frame as an isolated image, without any temporal modeling added.
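That last point, rendering the video as isolated images, amounts to nothing more than a per-frame loop; any stability across frames has to come from how the model was trained. A short OpenCV sketch, where colorize_image is a placeholder callable standing in for a DeOldify-style image colorizer (not DeOldify's own API).

```python
import cv2  # OpenCV for video I/O

def colorize_video_per_frame(src_path: str, dst_path: str, colorize_image) -> None:
    """Colorize a video one frame at a time, with no temporal modeling.

    `colorize_image` is a placeholder callable (BGR frame in, BGR frame out);
    each frame is handled in complete isolation from its neighbors.
    """
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(colorize_image(frame))  # frame-by-frame, no temporal state

    cap.release()
    out.release()
```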

In an interview, Dagu said that traditional manual restoration relies on artists hand-painting color onto each frame; AI follows the same working logic, but its computation speed and accumulated experience are far greater.

Dagu believes that the colors in the film, inferred solely from the AI's own training, tend to be relatively muted and can hardly achieve complete historical accuracy, whereas human artists restore colors according to the history of the period and are therefore more accurate.

Who is Dagu? A post-90s full-stack artist

Dagu was born in Beijing in 1991 (making him 28 at the time) and holds a master's degree in computer art from the School of Visual Arts in New York. As an artist, musician, programmer, and independent game designer, he is astonishingly prolific.

A look at his portfolio turns up Steam games, original game music, hand-drawn sketches, animation, and more.

You have probably heard of full-stack engineers, but what about a full-stack artist? Games, comics, 3D, VR, music: he is proficient in all of them. Running a few AI models on top of open-source code is, of course, well within his grasp.

Beyond these open-source AI models, Dagu also brings superb post-production skills, which is what lets the Beijing street life and Shanghai fashion show of a hundred years ago appear so vividly before today's viewers.


Reference link:

https://b23.tv/Q6D412

https://m.weibo.cn/1618051664/4534581762203262

This article is from the WeChat official account Xin Zhiyuan (ID: AI_era); author: Xin Zhiyuan.