This week, Alibaba Group published a voluntary update on its share repurchase program for the March quarter of the 2024 fiscal year.
In other news, Alibaba Cloud has expanded its lineup of free, open-source resources with the introduction of a video generation toolkit on ModelScope.
Alibaba Group Increased Share Repurchase Pace During March Quarter
Alibaba repurchased 524 million ordinary shares for a total of $4.8 billion during the three months ended March 31, 2024, up from $2.9 billion in the previous quarter, according to documents filed this week.
Earlier this year, the group said it would begin voluntary updates on its share repurchase program to increase transparency for investors.
Alibaba’s total shareholder yield, combining dividends and share repurchases, reached 6.5% during the 2023 calendar year, leading its large-cap peers.
The ramped-up repurchase program is part of a broader push toward “returning value to shareholders,” as Alibaba Group Chairman Joe Tsai outlined last year.
“Our capital management activities are dynamic and remain a top priority for our management team and our Board of Directors,” he noted during an earnings call in November.
Under the current program, which runs through the end of March 2027, the group has $31.9 billion available for further repurchases.
Alibaba Cloud Open Sources Toolkits for Video Generation Model Development
Alibaba Cloud this week revealed its latest open-source initiative to spur the development of video generation AI models.
It open-sourced a set of toolkits on its AI model community ModelScope to power the development of text-to-video models, including data processing tools, multimodal datasets, foundation models, and training and inference tools.
Video generation models require massive amounts of high-quality training data and advanced processing tools for multimodal datasets.
To tackle the data processing challenge, Alibaba Cloud open-sourced Data-Juicer, a one-stop data processing system that contains hundreds of dedicated operators and tools for video, image, audio, text and other multimodal data.
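The announcement itself does not include code, but a rough sketch of how a config-driven cleaning pipeline in this style is typically run appears below. It assumes Data-Juicer's dj-process command-line entry point; the configuration keys and operator names (video_duration_filter, video_resolution_filter, text_length_filter) are illustrative assumptions, not a list of the operators the toolkit actually ships.

```python
# Minimal sketch (assumptions noted above): cleaning a raw video-caption dataset
# with a Data-Juicer-style, config-driven pipeline before text-to-video training.
import subprocess
from pathlib import Path

# Assumed config layout, modeled on Data-Juicer's YAML-based demos.
config = """\
project_name: t2v-data-clean
dataset_path: ./raw_video_captions.jsonl      # one caption + video path per record
export_path: ./cleaned_video_captions.jsonl
np: 4                                          # number of worker processes
process:
  - video_duration_filter:                     # illustrative operator name
      min_duration: 2
      max_duration: 20
  - video_resolution_filter:                   # illustrative operator name
      min_width: 256
      min_height: 256
  - text_length_filter:                        # illustrative operator name
      min_len: 5
      max_len: 200
"""

Path("video_clean.yaml").write_text(config)

# Run the batch-processing entry point against the config.
subprocess.run(["dj-process", "--config", "video_clean.yaml"], check=True)
```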
It also open-sourced a denoising foundation model trained on a small dataset. Developers can use the foundation model as a starting point for further training to build their own video generation models.
Since its launch, over 4 million developers have tapped ModelScope to gain access to more than 3,000 models and thousands of datasets.
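For context on how developers typically pull models from ModelScope, here is a brief, hedged sketch using the modelscope Python library's pipeline interface. The model ID 'damo/text-to-video-synthesis' is a publicly listed text-to-video checkpoint used purely as an example; it is not necessarily part of the newly released toolkits.

```python
# Illustrative sketch: running a text-to-video model hosted on ModelScope.
# Assumes `pip install modelscope` plus the model's own dependencies.
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys

# Build an inference pipeline for the example checkpoint (downloads on first use).
t2v = pipeline("text-to-video-synthesis", model="damo/text-to-video-synthesis")

# The pipeline takes a dict with a text prompt and returns a path to the video file.
result = t2v({"text": "A panda eating bamboo on a rock."})
print("Generated video saved to:", result[OutputKeys.OUTPUT_VIDEO])
```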