
Ethereum's Vitalik Buterin backs TiTok image compression for blockchain apps

According to Ethereum (ETH) co-founder Vitalik Buterin, the new image compression method TiTok (Transformer-based 1-Dimensional Tokenizer) can encode images into a representation small enough to store onchain.

On his Warpcast social media account, Buterin called the image compression method a new way to “encode a profile picture.” He went on to say that if it can compress an image to 320 bits, which he called “basically a hash,” the pictures would be small enough to go onchain for every user.
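The arithmetic behind that figure is straightforward. The sketch below is only a rough illustration, not code from TiTok or from Buterin: it assumes a 1,024-entry codebook so that each of the 32 tokens fits in 10 bits (the assumption under which 32 tokens come to exactly 320 bits), and packs them into 40 bytes of the kind a contract could store.

# Minimal sketch of the 320-bit arithmetic: 32 tokens x 10 bits = 320 bits = 40 bytes.
# The 1,024-entry codebook (10 bits per token) is an assumption made to match the
# figure Buterin quoted; TiTok itself may use a different codebook size.

def pack_tokens(token_ids: list[int], bits_per_token: int = 10) -> bytes:
    """Pack token IDs into a compact byte string, e.g. for onchain storage."""
    assert all(0 <= t < (1 << bits_per_token) for t in token_ids)
    value = 0
    for t in token_ids:
        value = (value << bits_per_token) | t
    total_bits = bits_per_token * len(token_ids)
    return value.to_bytes((total_bits + 7) // 8, "big")

def unpack_tokens(blob: bytes, n_tokens: int = 32, bits_per_token: int = 10) -> list[int]:
    """Recover the original token IDs from the packed bytes."""
    value = int.from_bytes(blob, "big")
    mask = (1 << bits_per_token) - 1
    return [(value >> (bits_per_token * i)) & mask for i in reversed(range(n_tokens))]

token_ids = list(range(32))      # a hypothetical 32-token encoding of a profile picture
blob = pack_tokens(token_ids)    # 40 bytes -- "basically a hash," as Buterin put it
assert len(blob) == 40
assert unpack_tokens(blob) == token_ids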

The Ethereum co-founder learned about TiTok through an X post by a researcher at the artificial intelligence (AI) image generation platform Leonardo AI.

The researcher, posting under the handle @Ethan_smith_20, briefly explained how the method lets those interested in reinterpreting high-frequency details within images encode complex visuals into just 32 tokens.

Buterin’s perspective suggests the method could make it significantly easier for developers and creators to create profile pictures and non-fungible tokens (NFTs).

Solving previous image tokenization issues

TiTok, developed through a collaboration between TikTok parent company ByteDance and the Technical University of Munich, is described as an innovative one-dimensional tokenization framework that diverges significantly from the prevailing two-dimensional methods in use.

According to the research paper on the image tokenization method, TiTok can compress a rendered 256 by 256-pixel image into “32 distinct tokens.”
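To put that in perspective, here is a back-of-the-envelope size comparison. The 10-bits-per-token figure is an assumption consistent with the 320-bit number above, and the tokens are indices into a learned codebook, so rebuilding the picture still requires TiTok's decoder rather than a simple decompression step.

# Rough size comparison implied by the "32 distinct tokens" claim.
# The 10 bits per token is an assumption matching the 320-bit figure above.

raw_bits = 256 * 256 * 3 * 8      # uncompressed 256x256 RGB image: 1,572,864 bits
titok_bits = 32 * 10              # 32 tokens at 10 bits each: 320 bits
print(raw_bits // titok_bits)     # roughly 4,900 times smaller than the raw pixels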

The paper pointed out issues with previous image tokenization methods such as VQGAN: image tokenization was already possible, but existing strategies were limited to “2D latent grids with fixed downsampling factors.”

2D tokenization could not get around the redundancy found within images, where neighboring regions tend to exhibit strong similarities.

TiTok promises to solve this by tokenizing images into 1D latent sequences, providing a “compact latent representation” and eliminating that regional redundancy.
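A rough way to see the difference in token budgets: a 2D tokenizer with a fixed downsampling factor still needs one token per grid cell, while TiTok's 1D sequence uses a fixed 32. The downsampling factor of 16 below is an assumed, typical value for VQGAN-style models, not a figure from the article.

# Illustrative token counts for a 256x256 image; the downsampling factor of 16
# is an assumed, common setting for VQGAN-style 2D tokenizers.

image_side = 256

downsample = 16                                # fixed downsampling factor of a 2D tokenizer
tokens_2d = (image_side // downsample) ** 2    # 16 x 16 latent grid -> 256 tokens

tokens_1d = 32                                 # TiTok's 1D latent sequence length

print(tokens_2d, tokens_1d)                    # 256 vs 32: an 8x shorter sequence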

The tokenization strategy could also help streamline image storage on blockchain platforms while delivering remarkable gains in processing speed: it boasts speeds up to 410 times faster than current technologies, a huge step forward in computational efficiency.



This article was originally published by crypto.news.
