ChaoticNeutralCzech

joined 2 years ago
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Satellite-girl on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the Horizon satellite from Random-tan Studio's cybermoe comic Sammy, page 18, prior to remastering.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Watchers on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, digital art usually survives upscaling by a well-trained algorithm with little to no undesirable effect. Why? The drawing originated as a series of brush strokes, fill areas, gradients etc., which could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.)

Suppose I gave you a low-res image of the flag of South Korea ๐Ÿ‡ฐ๐Ÿ‡ท and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw it with vector shapes in the same colors, recreating every stroke and arc in the image, and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process: not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
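The flag analogy can be sketched in a few lines of hypothetical Python (this is an illustration of the principle, not how waifu2x actually works): a "drawing" defined over continuous coordinates, like vector art, can be re-rendered at any resolution, while naive pixel upscaling can only repeat the samples it already has, so the two disagree only along edges.

```python
def drawing(x, y):
    # A toy two-color "flag": a disc centered in the unit square.
    # This stands in for the vector description of the artwork.
    return 1 if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.16 else 0

def render(size):
    # Sample the continuous drawing at the center of each pixel.
    return [[drawing((i + 0.5) / size, (j + 0.5) / size)
             for i in range(size)] for j in range(size)]

def nearest_neighbor(img, factor):
    # Naive upscaling: each source pixel becomes a factor-by-factor block.
    return [[img[j // factor][i // factor]
             for i in range(len(img[0]) * factor)]
            for j in range(len(img) * factor)]

low = render(16)                      # the low-res raster we were given
true_hi = render(64)                  # re-rendered from the "vector" source
naive_hi = nearest_neighbor(low, 4)   # pixel duplication

# Pixels where naive upscaling disagrees with the true high-res rendering;
# all of them sit in a thin band along the disc's edge.
errors = sum(a != b
             for row_t, row_n in zip(true_hi, naive_hi)
             for a, b in zip(row_t, row_n))
```

A learned upscaler tries to approximate `true_hi` from `low` alone, i.e. to guess the edge band that `nearest_neighbor` gets wrong.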

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Blimp on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Frostpunk steam vehicle on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Automaton

Thanks to @BonerMan@ani.social for identifying the steam engine!

KV-5 (Random-tan Studio) (files.catbox.moe)
submitted 7 months ago* (last edited 7 months ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: KV-5 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

At this point, it's becoming pretty clear that the suggestions have been overtaken by objects with prominent spherical features.

There should be four in the front and four in the back to match the number of hemispheres.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Interdictor class SD on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Milano on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Regina on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Milkshake :3 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

The posts seem to be getting better lately.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Kebab-chan on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Buran on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

[โ€“] ChaoticNeutralCzech@lemmy.one 6 points 7 months ago* (last edited 7 months ago) (2 children)

TenTh also did Crabsquid, but I'm posting the images in order, so it will be its turn in about 2 weeks. (Edit: here)

Here are some results from elsewhere on the internet, mostly by Dino-Rex-Makes. Feel free to feed the links to your posting script and schedule them.

Lava Larva
Pengwings
Crabsquid+Ampeel
Peeper
Peeper
Cuddlefish
Sea Monkey
Crashfish
Crashfish
Warper
Mesmer
Mesmer
Yellow Sub-MER-ine

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Longleg on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

[โ€“] ChaoticNeutralCzech@lemmy.one 2 points 7 months ago (1 children)

You know.

I don't... Is there a disgusting story specific to the flamethrower?

Anyway, Elon Musk's enterprises were never not full of stupid ideas. He wanted to pay for his extensive tunnel network just by selling bricks made from the displaced soil. Did he expect millions of them to sell for hundreds of dollars each, like the limited-edition Supreme-branded ones? Or did he ever consider why roads were built on the surface in the first place if tunnels were so easy and profitable?

Around this time, he also claimed to have perfected solar roof tiles, while the demo houses actually featured no functional prototypes. The few units delivered were bad at both purposes. This didn't get nearly as much backlash as it should have, but hyperloop hype was still strong back then.

[โ€“] ChaoticNeutralCzech@lemmy.one 3 points 7 months ago (2 children)

This is one of the more realistic body shapes you'll see on !morphmoe@ani.social.

If you want to block all moe communities, they are conveniently listed in the sidebar.

[โ€“] ChaoticNeutralCzech@lemmy.one 1 points 8 months ago (1 children)

In real mirror pics, the phone is always perfectly aligned with the frame (obviously).

Needs more ads plastered at weird spots.

[โ€“] ChaoticNeutralCzech@lemmy.one 1 points 8 months ago* (last edited 8 months ago)

Actually, shaggy mane (Coprinus comatus) is edible.

[โ€“] ChaoticNeutralCzech@lemmy.one 2 points 8 months ago (1 children)

Rare OC on Lemmy. Thanks for this!

A little voodoo doll version of herself on that spear... Kinky
