What is new in Malaysia’s New Economic Model?


Prime Minister Najib announced the broad outline of the proposed New Economic Model (NEM) at the Invest Malaysia conference.

Malaysia’s New Economic Model proposes a number of strategic reforms.

The objective of the NEM is for Malaysia to join the ranks of the high-income economies, but not at all costs. The growth process needs to be both inclusive and sustainable. Inclusive growth enables the benefits to be broadly shared across all communities. Sustainable growth augments the wealth of current generations in a way that does not come at the expense of future generations.

A number of strategic reform initiatives have been proposed. These are aimed at greater private initiative, better skills, more competition, a leaner public sector, pro-growth affirmative action, a better knowledge base and infrastructure, the selective promotion of sectors, and environmental as well as fiscal sustainability.

The next step of the process will be a public consultation to gather feedback on the key principles; afterwards, the key recommendations will be translated into actionable policies.

The NEM represents a shift of emphasis in several dimensions:

  • Refocusing from quantity to quality-driven growth. Mere accumulation of capital and labor quantities is insufficient for sustained long-term growth. To boost productivity, Malaysia needs to refocus on quality investment in physical and human capital.
  • Relying more on private sector initiative. This involves rolling back the government’s presence in some areas, promoting competition, and exposing all commercial activities (including those of GLCs) to the same rules of the game.
  • Making decisions bottom-up rather than top-down. Bottom-up approaches involve decentralized and participative processes that rest on local autonomy and accountability —often a source of healthy competition at the subnational level, as China’s case illustrates.
  • Allowing for unbalanced regional growth. Growth accelerates if economic activity is geographically concentrated rather than spread out. Malaysia needs to promote clustered growth, but also ensure good connectivity between where people live and work.
  • Providing selective, smart incentives. Transformation of industrial policies into smart innovation and technology policies will enable Malaysia to concentrate scarce public resources on activities that are most likely to catalyze value.
  • Reorienting horizons towards emerging markets. Malaysia can take advantage of emerging market growth by leveraging on its diverse workforce and by strengthening linkages with Asia and the Middle East.
  • Welcoming foreign talent including the diaspora. As Malaysia improves the pool of talent domestically, foreign skilled labor can fill the gap in the meantime. Foreign talent does not subtract from local opportunities; on the contrary, it generates positive spillover effects to the benefit of everyone.

Overall, the New Economic Model demonstrates the clear recognition that Malaysia needs to introduce deep-reaching structural reforms to boost growth. The proposed measures represent a significant and welcome step in this direction. What will matter most now is the translation of proposed principles into actionable policies and the strong and multi-year commitment to implement them.

http://blogs.worldbank.org/eastasiapacific/node/2887

———————————————————————————————————————————————-

Malaysia’s ‘New Economic Model’

KUALA LUMPUR, March 30 — Malaysian Prime Minister Datuk Seri Najib Razak today unveiled a raft of economic measures that he said would propel this Southeast Asian country to developed nation status by 2020.

Following are some of the highlights of what he announced:

•    State investor Khazanah to sell 32 percent stake in Pos Malaysia.

•    To list stakes in two Petronas units.

•    Facilitate foreign direct and domestic direct investments in emerging industries/sectors.

•    Remove distortions in regulation and licensing, including replacement of Approved Permit system with a negative list of imports.

•    Reduce direct state participation in the economy.

•    Divest GLCs in industries where the private sector is operating effectively.

•    Strengthen the competitive environment by introducing fair trade legislation.

•    Set up an Equal Opportunity Commission to cover discriminatory and unfair practices.

•    Review remaining entry restrictions in products and services sectors.

•    Phase out price controls and subsidies that distort markets for goods and services.

•    Apply government savings to a wider social safety net for the bottom 40 per cent of households, prior to subsidy removal.

•    Have zero tolerance for corruption.

•    Create a transformation fund to assist distressed firms during the reform period.

•    Ease entry and exit of firms as well as high-skilled workers.

•    Simplify bankruptcy laws pertaining to companies and individuals to promote vibrant entrepreneurship.

•    Improve access to specialised skills.

•    Use appropriate pricing, regulatory and strategic policies to manage non-renewable resources sustainably.

•    Develop a comprehensive energy policy.

•    Develop banking capacity to assess credit approvals for green investment using non-collateral based criteria.

•    Liberalise entry of foreign experts specialising in financial analysis of viability of green technology projects.

•    Reduce wastage and avoid cost overrun by better controlling expenditure.

•    Establish open, efficient and transparent government procurement process.

•    Adopt international best practices on fiscal transparency. — Reuters

Source: http://www.themalaysianinsider.com/index.php/business/58004-malaysias-new-economic-model

————————————————————————————————————————————–

Related articles:

Malaysia must act now to retain competitiveness


CORRECTED-BREAKINGVIEWS-Malaysia needs to get out of its economy’s way‎ – Interactive Investor
The leap that Malaysia must make‎ – Business Times (subscription)
Income inequality remains difficult to overcome‎ – Malaysia Star


Apple Sued Over iPad Patent Infringement


By Dan Hope, TechNewsDaily Staff Writer

With the public release of the Apple iPad looming, Elan Microelectronics, a Taiwanese chipmaker, is suing Apple, claiming many Apple products infringe on its multitouch patents.

Elan has asked the International Trade Commission (ITC) to ban imports of the iPhone, iPod Touch, MacBook, Magic Mouse and even the yet-to-be-released iPad.

“We have taken the step of filing the ITC complaint as a continuation of our efforts to enforce our patent rights against Apple’s ongoing infringement. A proceeding in the ITC offers a quick and effective way for Elan to enforce its patent,” the company said in a statement.

Elan says it owns patents covering “touch-sensitive input devices with the ability to detect the simultaneous presence of two or more fingers,” which is exactly what these Apple products do. Apple has not released a formal response to the lawsuit yet.

This isn’t the first time Elan has sued over its multitouch patent. Two years ago it sued Synaptics in a similar case. Synaptics ended up entering a licensing deal with Elan, but it’s not a foregone conclusion that Apple will do the same thing, since Apple is no stranger to prolonged legal battles.

There is also an element of irony in Apple being sued for multitouch patent infringement because the company recently brought a similar suit against smartphone maker HTC. Apple said HTC phones with the Android operating system infringed on over 20 Apple patents, including some that had to do with multitouch interfaces.

The lawsuit won’t affect sales of pre-ordered iPads slated to go on sale this Saturday, many of which have already shipped.

Source:  http://newscri.be/link/1058559


Better media links help China, India


BEIJING – Strengthened media cooperation between India and China will help improve understanding and promote more beneficial bilateral ties between the two countries, officials from both sides proposed on Tuesday.

“China and India are enjoying a relationship which is deepening and broadening,” S. Jaishankar, the Indian ambassador to China, said at the 2010 India-China Development Forum in Beijing. Jaishankar noted in his speech that both nations had witnessed some controversial and negative media coverage about each other last year, but said it was “no use blaming each other”.

Jaishankar proposed that China shift its focus from the various media debates in India to evaluating the results those debates bring about.

“Our media coverage will be more positive if we promote our relationship, and of course, a more efficient interpretation and dialogue is needed for such progress.” Wang Chen, minister of the State Council Information Office, also noted the importance of the media, as direct communication between the two peoples was limited.

“China and India together account for almost half of the world’s population; more intensified media coverage by both countries about our progress and efforts is much needed,” he said. Wang proposed that both countries report in a more positive and all-round manner, as well as cover mutual achievements.

“We hope the media will become the window of understanding for both sides,” Wang said.

“Although both Chinese and Indian media have made great strides in recent years, the Western media still had the upper hand. China and India get to know each other through Western media outlets such as CNN and BBC, which somehow lead to misunderstanding. The media cooperation should be enhanced between the two countries.” A media cooperation committee was also proposed during the forum.

Zeng Jianhua, executive director of the Department of Asian, African and Latin American Affairs at the Chinese People’s Institute of Foreign Affairs said such a panel would help China and India put aside differences due to their different political and cultural backgrounds, and seek a common ground for mutual development.

By Hu Haiyan and Ai Yang (China Daily)  Updated: 2010-03-31 07:48
Source: http://newscri.be/link/1058551

Greenpeace: Cloud Computing Greenhouse Gas Emissions to Triple


BY Ariel Schwartz

Make IT Green

As cloud computing-fueled devices like the iPad grow in popularity, so will associated greenhouse gas emissions, according to Greenpeace’s “Make IT Green” report. The report, which dubs 2010 the Year of the Cloud, offers up a disturbing statistic: Cloud computing greenhouse gas emissions will triple by 2020.

The increase in emissions makes sense. As we increasingly rely on the cloud to store our movies, music, and documents, cloud providers will continue to build more data centers–many of which are powered by coal. Facebook, for example, recently announced that it is building a data center in Oregon that will be powered mostly by coal-fired power stations, much to the chagrin of groups like Greenpeace.

The solution to the cloud computing problem is fairly obvious. Greenpeace explains in its report, “Companies like Facebook, Google, and other large players in the cloud computing market must advocate for policy change at the local, national, and international levels to ensure that, as their appetite for energy increases, so does the supply of renewable energy.” As we’ve noted before, companies like IBM, Google, and HP have already begun to make strides in cutting data center energy use. But there is still plenty of work to be done–as it stands, the cloud will use 1,963.74 billion kilowatt hours of electricity by 2020.

Source: http://newscri.be/link/1058493

Intel (finally) uncages Nehalem-EX beast


Like Itanium. But you might actually use it

By Timothy Prickett Morgan

Intel’s switch to the Nehalem architecture was finally completed Tuesday with the launch of the Nehalem-EX Xeon 6500 and 7500 processors, the last of the Core, Xeon, and Itanium chips to get the Quick Path Interconnect and a slew of features that make Intel chips compete head-to-head with alternatives from Advanced Micro Devices. The price war at the midrange and high-end of the x64 market can now get underway, while the all-out, total price war awaits the debut of AMD’s Opteron 6100 processors in the second quarter.

Since the summer of 2008, Intel has been previewing its top-end, eight-core Nehalem-EX beast, which we now know as the Xeon X7560. As with prior generations of Xeons, the Nehalem-EX line does not comprise one or two chips, but a mix of chips with different features (clock speed, cache memory, HyperThreading, and Turbo Boost) dialed up and down to give customers chips tuned for specific workloads.

While last year’s Nehalem-EP Xeon 5500 and this year’s Westmere-EP Xeon 5600 processors are aimed at workstations or servers with two sockets, with the Nehalem-EX lineup Intel has broadened the definition of its Expandable Server (this is apparently what EX is short for, while EP is supposed to be an abbreviation for Efficient Performance) to include two-socket machines as well as the four-socket and larger machines that prior generations of Xeon MP processors were designed for.

Intel, no doubt, would have preferred to keep the Xeon DP and Xeon MP product lines more distinct, and charged a hefty premium for machines that needed expanded processor sockets or memory capability. But server makers and their customers were having none of that. With the rapid adoption of server virtualization and the need for larger memory footprints even for two-socket boxes, the Nehalem-EX processors have been tweaked so they can be used to support very fat memory configurations on even two-socket workhorse servers. This will eat into the volume Xeon 5500 and 5600 market, to be sure, but it is better to sell a Xeon 6500 or 7500 server in a two-socket box than have a customer dump Intel for AMD.

The Xeon 6500 and 7500 processors will also blur some lines between Xeon processors and the former “flagship” Itanium processors, which were supposed to take over the desktop and server arena starting a decade ago, but have been relegated mostly to high-end servers from HP running HP-UX, NonStop, and OpenVMS at this point in their history. The Itaniums were distinct in many ways from the Xeons, but the main distinction they held was better reliability, availability, and serviceability (RAS) features than Xeons had, on par with mainframe, RISC, and other proprietary architectures from days gone by.

[Image: Intel Nehalem-EX die shot. The eight-core Nehalem-EX Xeon 7500 beast]

But at the launch event today in San Francisco, Kirk Skaugen, vice president of the Intel Architecture Group and general manager of its Data Center Group, made no bones about the fact that the Nehalem-EX processors and their related Boxboro chipset that is shared with the Itanium 9300 processors launched in early February have common RAS features.

The new chip, explained Skaugen, has 20 new RAS features, including extended page tables and virtual I/O capabilities as well as a function that is in mainframes, RISC iron, and Itaniums called machine check architecture recovery, which allows a server to have a double-bit error in main memory and cope with it without halting the system. With Windows, Solaris, and Linux supporting these RAS features, as well as VMware’s ESX Server hypervisor, this makes servers based on the Xeon 7500s just as suitable a replacement for proprietary midrange and mainframe platforms and RISC/Unix servers as the formerly beloved Itaniums.

Skaugen said that the Nehalem-EX chips would allow server makers to create two-socket servers that support up to 512GB of main memory, nearly three times as much as AMD can do using 8GB DIMMs with the Magny-Cours Opteron 6100s announced yesterday. Intel will be able to support 1TB of main memory in a four-socket configuration, while the controller inside the Opteron 6100 only allows a four-socket machine using these chips to address a maximum of 512GB.

Skaugen rubbed it in a little that Intel’s Nehalem-EX partners had over 50 new products in rack, tower, and blade form factors, and that it had 75 per cent more four-socket designs than with any prior server chip launch in its history. A dozen OEM partners have 15 different servers in the works that will span eight or more processor sockets, and apparently some are pushing their designs up to 16, 32, or 64 sockets.

The big bad box at the Nehalem-EX launch, of course, was the Altix UV massively parallel supercomputer, which El Reg told you all about last November. The Altix UV machines allow for up to 2,048 cores (that’s 256 sockets and 128 two-socket blades) to be lashed together in a shared memory system suitable for running HPC codes. The shared global memory is not the same as a more tightly coupled symmetric multiprocessing (SMP) or non-uniform memory access (NUMA) cluster used in general purpose servers for running applications and databases. But that said, the Altix UVs are very powerful machines indeed and are intended to scale to petaflops of performance.

The Boxboro chipset that Intel is shipping as a companion to the Nehalem-EX chips supports configurations with two, four, or eight sockets gluelessly. If you want more sockets than that, you have to create your own chipsets, as HP, IBM, Silicon Graphics, and Bull have done for sure and others will no doubt follow.

But you can’t just plug any old Nehalem-EX chip into any old configuration. That would be too simple, and Intel likes to charge premiums for features, like most capitalists. Take a gander at the feeds and speeds of the Nehalem-EX lineup:

[Table: the Intel Nehalem-EX Xeon 7500 and 6500 processors]

The first thing you will notice is that there are two different families of Nehalem-EX processors. The Xeon 7500s are aimed at general-purpose workloads and offer the most socket expandability. All of these chips can be used in two-socket or four-socket boxes, and some of them can be used in eight-socket or larger machines, too. The Xeon 6500s are cut-down versions of the chips that only work in two-socket boxes and that are specially tuned for the HPC market. These chips, explained Skaugen, were optimized to have the highest bytes per floating point operation ratio while minimizing the amount of node-to-node communication among the processors in the complex.

The top-end X7560 part has eight cores spinning at 2.26GHz, has 24MB of L3 cache on the chip, and is rated at 130 watts on Intel’s thermal design point (TDP) scale. The chip supports Turbo Boost, which allows a core to have its cycle time jacked up if other cores are shut down when they’re not being used, and it also supports Intel’s HyperThreading simultaneous multithreading, which virtualizes the physical pipeline in the chip so it looks like two virtual pipelines to a system’s operating system and its applications. In best-case scenarios, HT can boost application performance by around 30 per cent. In 1,000-unit trays, the per-chip price for the X7560 is a whopping $3,692. That is exactly what Intel charged for a dual-core Montvale Itanium 2 with 24MB of L3 cache.

The X7550 drops the clocks down to 2GHz, chops the L3 cache down to 18MB, and brings the price down to $2,729, which is exactly what Intel was charging for its top-bin six-core Dunnington Xeon X7460 processor running at 2.66GHz with 16MB of L3 cache. The next part down, the X7542, jacks the clocks up to 2.66GHz, keeps the 18MB cache, cuts out HyperThreading, and reduces the core count from eight to six; the price drops to $1,980.

For that same $1,980 you can get a standard 105 watt part, the E7540, running at 2GHz with six cores and that same 18MB cache. If you are willing to take lower clock speeds, you can get even cheaper standard parts, the E7530 and E7520, which cost $1,391 and $856, respectively. Intel has also cooked up two low-voltage parts, the L7555 and L7545, running at 1.86GHz and rated at 95 watts, which have eight and six cores, respectively. These are reasonably pricey chips that will no doubt be used inside Nehalem-EX blade servers where a premium is expected in exchange for extra density.

Generally speaking, the Xeon 6500 processors are cheaper than their Xeon 7500 counterparts because they have some features and functions turned off, as El Reg predicted they would last fall. This is in keeping with the general philosophy that HPC shops are super-stingy and will not pay one extra penny for a feature they don’t want and will never use.

The Nehalem-EX processors are implemented in 45 nanometer processes and have 2.3 billion transistors. ®

Source: http://newscri.be/link/1058499

Google mocks Steve Jobs with Chrome-Flash merger


Mountain View comes out of the plug-in closet

By Cade Metz in San Francisco

When Steve Jobs met Google boss Eric Schmidt for coffee late last week, they may or may not have reached some common ground on certain hot-button subjects. But odds are, they didn’t see eye-to-eye on Adobe Flash. As Jobs prepares to ship his much ballyhooed Apple iPad without even the possibility of running Flash – which he calls “buggy,” littered with security holes, and a “CPU hog” – Google is actually integrating the beleaguered plug-in with its Chrome browser.

With a blog post on Tuesday, Mountain View announced that Flash has been integrated with Chrome’s developer build and that it plans to offer similar integration with its shipping browser as quickly as possible.

Google has been known to say that HTML5 is the way forward for internet applications. But clearly, it believes in the plug-in as well, and it has no intention of pushing all development into the browser proper.

“Just when we thought that Google was the champion of HTML5 they turn around and partner with Adobe on Flash to ensure that the web remains a mess of proprietary brain damage,” one netizen said in response to Google’s post.

Last summer, Google proposed a new browser plug-in API, and with today’s blog post, it also said that Adobe and Mozilla have joined this effort. “Improving the traditional browser plug-in model will make it possible for plug-ins to be just as fast, stable, and secure as the browser’s HTML and JavaScript engines,” the company said. “Over time this will enable HTML, Flash, and other plug-ins to be used together more seamlessly in rendering and scripting.

“These improvements will encourage innovation in both the HTML and plug-in landscapes, improving the web experience for users and developers alike.”

What’s more, Mountain View is developing a native code browser platform of its own, dubbed Native Client. This is already rolled into Chrome, and it will be an “important part” of the company’s browser-based Chrome operating system, set for launch in the fall.

By integrating Flash with Chrome, Google said that it will ensure users always receive the latest version of the plug-in, updating it automatically as needed via Chrome’s existing update mechanism. And in the future, the company added, it will include Flash content in Chrome’s “sandbox,” which restricts the system privileges of Chrome’s rendering engine in an effort to ward off attacks.

In July, with a post to the Mozilla wiki, Google proposed an update to the Netscape Plug-in Application Programming Interface (NPAPI), the API still in use with browsers like Chrome and Firefox, and both Adobe and Mozilla are now working to help define the update.

“The traditional browser plug-in model has enabled tremendous innovation on the web, but it also presents challenges for both plug-ins and browsers. The browser plug-in interface is loosely specified, limited in capability and varies across browsers and operating systems. This can lead to incompatibilities, reduction in performance and some security headaches,” Google said today.

“This new API aims to address the shortcomings of the current browser plug-in model.”

The new setup was developed in part to make it easier for developers to use NPAPI in tandem with Native Client. “This will allow pages to use Native Client modules for a number of the purposes that browser plugins are currently used for, while significantly increasing their safety,” Google said when the new API was first announced.

Native Client and NPAPI have been brewing for months upon months, but today’s Chrome announcement would seem to be a conscious answer to Steve Jobs’ hard-and-fast stance on Flash. Presumably, the company sees this as a way to ingratiate itself with existing Flash shops who’ve been shunned by the Apple cult leader.

One of the many questions that remain is whether Chrome will give users the option of not installing Flash. With the new developer build – available here – you must enable integrated Flash with a command line flag. ®

Source: http://newscri.be/link/1058500

Google enhances website analytics


In its continuing quest to be more than just the world’s preferred search engine, Google recently added new features to its free website analysis program aimed at enterprises.

“Web Analytics is essentially a sophisticated website monitoring system,” said head of Web Analytics at Google South-East Asia Vinoaj Vijeyakumaar.

“Beyond just noting how many people visit your site, you can see what they do there and how much time they spend doing it.

“You can set and manage sales goals and receive automatic business reports based on those goals. This kind of intelligence can greatly improve productivity in any industry,” he said.

With the new enhancements, Google added about 20 preset goals to the Web Analytics repertoire.

In-depth intelligence reports have also been enhanced. However the company acknowledged that algorithms used for those reports will not be made publicly available.

To help enterprises get the most out of Web Analytics, Google has appointed “authorised consultants” who are certified by the company to train staff members in how to use the program.

“We have three authorised consultants based in Singapore and we hope to open one in Malaysia very soon,” said head of communications for Google South-East Asia Dickson Seow.

“Knowing how to use all the features in the most effective manner can help online traders stay ahead of the game.” For more information, surf to www.google.com/analytics.
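The command-queue mechanism behind that kind of page tracking can be sketched in a few lines. This is a minimal illustration of the classic asynchronous Google Analytics pattern of the era: pages push commands onto a global array, and the ga.js script drains the queue once it finishes loading, so tracking never blocks rendering. The account ID and the goal path below are placeholders, not values from the article.

```typescript
// Each queued command is a name followed by its string arguments.
type GaCommand = [string, ...string[]];

// The global queue; on a real page this is `var _gaq = _gaq || [];`
// so commands can be pushed before ga.js has loaded.
const _gaq: GaCommand[] = [];

_gaq.push(["_setAccount", "UA-XXXXX-X"]); // bind hits to an Analytics account (placeholder ID)
_gaq.push(["_trackPageview"]);            // record the current pageview

// A sales goal configured in the Analytics UI is matched by an ordinary
// hit, e.g. a virtual pageview fired when a checkout completes:
_gaq.push(["_trackPageview", "/goals/checkout-complete"]);

// In a real page, ga.js is then loaded asynchronously and consumes the queue.
console.log(`${_gaq.length} commands queued for ga.js`);
```

On a live site the queue is consumed by the ga.js script served by Google; the array-push indirection is what made the snippet safe to run before that script had downloaded.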

Source: By STEFAN NAIDU intech@thestar.com.my

Sun’s IBM-mainframe flower wilts under Oracle’s hard gaze


By Gavin Clarke in San Francisco

Posted in Operating Systems, 29th March 2010 23:46 GMT

Larry Ellison likes to buzz rotten fruit off some corporate type’s head. Over the years Microsoft, PeopleSoft, BEA Systems, SAP, and Red Hat have lined up to be duly pelted during calls with Wall St or during Ellison’s company’s mega OpenWorld customer and partner conference.

It’s all good theater in the crucible of Silicon Valley, but it’s theater nonetheless, and a form of performance that will always have a shallow veneer. When there’s money involved, you can say what you want about your rivals during a conference call – it’s just words.

For example: almost two-thirds of SAP implementations run on Oracle’s database, which means SAP – a company regularly pilloried by Ellison – actually translates into big money and helps keep Oracle’s chief executive in yachts.

Turning to Oracle’s acquisition of Sun Microsystems, then, it’s with some justification that those people involved in technologies that were spun up by Sun during its era of a thousand blooming flowers and that have little visible business return on investment should now feel worried.

Users of Sun’s Project Kenai hit the panic button recently after Oracle said it was bringing Sun’s Web 2.0 code-hosting site in-house. Oracle U-turned, blaming a – ahem – “miscommunication”.

The OpenSolaris community started screaming that it was being ignored by Oracle. The giant responded to say it wasn’t ignoring them, it was just overworked getting its arms around the whole Sun thing.

To the ranks of the concerned, you can now add those working to put Solaris and OpenSolaris on IBM’s Z-series mainframe. One Solaris on Z-series supporter contacted The Reg to say:

The SystemZ port of Solaris is dead. Oracle pulled all the plugs and refused to give the authors further help. Critical parts are closed: libc.so.1, the core userland library, has closed-source parts. Oracle now refuses to give precompiled binaries of newer versions of the closed parts to the SystemZ port community, effectively ending this port because the missing bits cannot be replicated or bypassed.

Also concerned is David Boyes, president and chief technologist of Sine Nomine Associates – the engineering firm that helped put OpenSolaris on IBM’s System Z mainframe in 2008. OpenSolaris was to become part of the main Solaris product.

Boyes told The Reg that the Sun employee working on the port has gone – chopped as the result of Ellison’s Sun employee cull – and hasn’t been replaced. Boyes is certain Oracle is not going to replace that person.

Oracle was unable to comment for this article.

On paper, the future is not too bright for Solaris or OpenSolaris on IBM’s mainframe platform. In the two years of the project’s life, it’s been downloaded just 1,000 times – sometimes repeatedly by the same organizations. Otherwise, we’re told there are “plenty” of proofs of concept.

Boyes told us it’s wrong to say Oracle has “killed” OpenSolaris on IBM’s mainframe, but he noted the future is up for grabs as Oracle combs through the old Sun’s software and project assets and decides what to do with them. The party line from Oracle here and during the recent EclipseCon and the Open Source Business Conference is that it’s still working through projects and deciding what to do.

“This is all about politics and has nothing to do with technology,” Boyes said, angry that so much of his own company’s time – 20,000 to 30,000 hours – dedicated to the project could have been for nothing. “Guys who worked on the Power and Intel work outside of Sun are pretty damn pissed,” he said.

He added that while source code for OpenSolaris is still available and can still be enhanced, unless Oracle commits to putting Sun’s operating system on IBM’s Z mainframe he’ll have to put it on the back burner. “It will no longer have the priority if they make it clear this is going nowhere, and we will have to reconsider what we are doing,” Boyes said.

Boyes is right. This is political. Solaris has a future inside Oracle, on Exadata servers running Oracle’s database. Where OpenSolaris fits into that is unclear.

As for Solaris on the platform of a competitor that Ellison has taken enormous pleasure in pelting since the Sun acquisition, well – if Ellison does kill it, it won’t be for theatrical reasons. It’ll be because he’s decided he can’t make any money by having his own software run on IBM hardware.

If you want a sign of how much things have changed under the new management even at this early stage, consider this lesson from another corner of the OpenSolaris and Solaris camp.

InfoWorld has reported that Oracle has tweaked the Solaris download license, so that you can no longer download Solaris for free. You can now only use Solaris for free as part of a 90-day trial, and only if you purchase a service contract. Under nice – but slightly stoopid – Sun, all you had to do was jump through the hoops of some online survey and make sure you were smart enough to give a working email address for the download.

Yes, the flowers are wilting and anything that survives under Oracle will only bloom if it can deliver a return on Sun’s investment. ®

Google Obscures Decision Making Processes


Google’s recent action of redirecting google.cn traffic to servers in Hong Kong has raised much comment. Google’s claim is that it is doing this so it doesn’t have to follow Chinese government demands to censor what it has online.

The Chinese government, in its turn, has stated that it has the right to set the rules by which a corporation functions in its country.

In the process of this dispute, the diverse views of the Chinese Internet users, of its netizens, appear to be missing from Google’s considerations.

Some netizens in China posted an open letter to Google and to the Chinese government ministries, asking that each side in the dispute present, in an open way, what their views are so that the netizens can be part of the discussion and decision-making process.(1)

Google has ignored this request. It has done what it decided to do. It claims that this is its good deed for the world. But is it? Is Google acting with concern for netizens in China? The authors of the letter objected to the secret manner Google used to make its decision.

This situation is reminiscent of an experience that users of what is known as “Usenet” had with Google almost 10 years ago. In 2001, Google acquired from another company, from Deja.com, an archive of posts put on Usenet by its users. Deja was going out of business and allegedly sold the Usenet posts it had archived to Google.

At the time a number of users of Usenet were surprised that the posts they had contributed to Usenet discussion groups had been sold from one company to another. Also at the time there were concerns about what Google might be planning to do with the posts.

An effort was made to ask Google to recognize that Usenet itself had grown up as part of a cooperative online community of users who contributed their efforts and articles to help to enrich this online community.

There was a concern among users on Usenet: would the forms of participatory decision-making in this online community, which had been developed to involve users, be lost when a corporation like Google got involved in owning and controlling the archives of Usenet posts? Google was asked to contribute the archive, or at least a copy of the archive, to a public entity that could protect it.

At the time, Google ignored these requests. Instead, Google even began putting a copyright symbol on the articles in the archive, claiming that Google owned the copyright to the many contributed posts. This was contrary to the Berne Convention, the law governing such posts. Under the Berne Convention, which the US agreed to respect as its copyright law as of March 1, 1989, the posts remained the property of the users who had created them, not of Google.

Eventually Google stopped putting its copyright symbol on Usenet posts. This took quite a while, however, even though the illegal nature of such a claim had been pointed out to Google soon after it started putting its copyright symbol on Usenet users’ posts.

The significant point of the experience that I and other Usenet users had with Google, however, is that we found that Google acted according to its own interests and its own directives. Management at Google refused to respond to users’ concerns. In the process of this struggle I wrote an article titled “Culture Clash” which appeared on February 26, 2001 in the online magazine Telepolis describing what was happening with Google. (2)

In response to the article, I was invited to give a talk at Stanford University in California, where Sergey Brin and Larry Page, the creators of Google, had done their research on the search engine algorithm that was the basis for the Google search engine. I was told I would have a chance to debate what I had written in my article with Brin and Page at a program at Stanford. Once I arrived at Stanford, however, I was told that they would not be part of the program. Instead I could give the talk without them at Stanford and then go and speak at the corporate headquarters of Google.

I gave a talk at Stanford and then went to Google’s Mountain View headquarters and gave the talk a second time. While I appreciated having the chance to speak to, and afterwards have a discussion with, some of those working at Google at the time, neither Brin nor Page was available to participate in the program or to talk with me. Instead, the person I was told I could speak to offered no means for Usenet users to have input into Google’s decision-making process.

Based on my experience with Google, I wrote the article “Commodifying Usenet and the Usenet Archive or Continuing the Online Cooperative Usenet Culture?” (3) The article was published in the scholarly journal “Science Studies” in January 2002.

In the article, I described how Usenet had been created by a cooperative online process. An example I gave was when one of the pioneers involved in early Usenet development wanted to change the name of Usenet. He proposed this change to the users of Usenet. After an extended discussion it became clear that many users disagreed. The plan to change the name of Usenet was dropped. The name remained Usenet. There were a number of other similar examples in the early days of Usenet development where users were involved in the discussion of problems and in contributing to the decisions that were made.(4)

This was, however, no longer to be the case when Google became involved with Usenet. As a result, some aspects of Usenet have survived, especially the discussion groups dealing with technical issues. A number of other discussion groups that existed on Usenet, however, were negatively affected by the ways that Google and other companies began to make various decisions, not only with respect to how Usenet was archived or searched, but also affecting other aspects of Usenet.

In trying to understand what has happened as the corporate world represented by Google and other online services began to affect the online world and the experience of netizens in this online world, it is helpful to also keep in mind Google’s own origins.

When Brin and Page were students at Stanford University working on their search engine project, they wrote a paper criticizing the commercialization of search engine research. In the paper, they proposed the need for an open laboratory approach to working on search engine design. Such an approach would allow the best results to be developed and built by the research community. Brin and Page criticized the commercial decision-making processes, particularly the secrecy, the lack of community input into those processes, and the focus on advertisements. They argued that this had caused “search engine technology to remain largely a black art and to be advertising oriented.” (5)

The project that Brin and Page were part of had National Science Foundation (NSF) funding. US government funding during this period of the late 1990s took a turn toward promoting commercialization as opposed to supporting basic research in science and technology. The Director of the NSF, Dr. Rita Colwell, explained to the US Congress that the “transfer to the private sector of ‘people’ – first supported by the NSF at universities – should be viewed as the ultimate success” of the US government technology policy.(6)

The significance of this change was that Brin and Page became connected with the same “black art” they had critiqued as graduate students. The objective of the Google corporate structure is not to facilitate the sharing of ideas and the communication that enable the best design of search engine technology, which were the objectives Brin and Page advocated as researchers at Stanford.

More seriously, the vision of the Internet as a place where netizens strive to understand the problems that develop, and work together to find the solutions that will continue to foster an environment facilitating communication, is a vision the corporate entities do not share. Hence the culture clash that developed between Google and the Usenet community. Keeping this perspective in mind, it is helpful to look at what Google is doing with respect to netizens in China.

The situation with regard to China’s online world is one in which there are many important discussions online among netizens. Many of China’s netizens contribute to serious discussions on issues concerning the problems in China and the world.(7) This is an important development with respect to the Internet, a development that other netizens around the world can learn from.

Instead of Google learning from what is happening in China and trying to hear what China’s netizens are saying about Google’s concerns and plans, Google acts in ways that have an effect on China’s netizens without involving them in its decision-making process.

Unfortunately, many users around the world have become dependent on Google for many of their Internet activities and are thus at the mercy of but another corporate entity that does not care for the development of the kind of cooperative communication that the Internet and netizens have nourished and endeavored to spread more broadly and widely.

What is happening in the struggle between Google and China therefore is important, as Google claims it cares for the Chinese users, but there is no evidence that Google has seen any reason to consider the views and concerns of China’s netizens. Thus Google’s decision to redirect its google.cn traffic to servers in Hong Kong is but the decision of another corporation acting on the claim that the corporation knows best. Thus the culture clash between netizens and Google continues.

Notes

(1) Chinese netizens’ open letter to the Chinese Government and Google, Draft for Discussion, Version: 0.99, March 2010.

To the relevant Chinese government ministries and Google Inc.,
http://docs.google.com/View?docid=dfw7fpm7_77crfpc8fv

(2) Ronda Hauben, “Culture Clash: The Google Purchase of the 1995-2001 Usenet Archive And the Online Community”, Telepolis, February 21, 2001.
http://www.heise.de/tp/english/inhalt/te/7013/1.html

(3) Ronda Hauben, “Commodifying Usenet and the Usenet Archive or Continuing the Online Cooperative Usenet Culture?”, Science Studies 15 (2002), pp. 61-68.
http://www.columbia.edu/~rh120/other/usenetstts.pdf

(4) Ronda Hauben, “Early Usenet (1981-2): Creating the Broadsides for Our Day”.
http://umcc.ais.org/~ronda/new.papers/usenet_early_days.txt

(5) Sergey Brin and Larry Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine”.
http://infolab.stanford.edu/pub/papers/google.pdf

(6) http://www.nsf.gov/od/lpa/congress/106/rc00504approp.htm

(7) Ronda Hauben, “China in the Era of the Netizen”, Netizenblog.

http://blogs.taz.de/netizenblog/2010/02/14/china_in_the_era_of_the_netizen/

Ronda Hauben (netizen2)
Source:  http://newscri.be/link/1056655

Cradle boost for technopreneurs


Firm hopes to approve 12-24 applications for pre-seed funds

GEORGE TOWN: Cradle Fund Sdn Bhd, an agency under the Finance Ministry, is targeting to approve 12 to 24 applications for its pre-seed funds and five to 10 applications for its seed funds from Penang’s technopreneurs this year.

Chief executive officer Nazrin Hassan said Cradle would give RM150,000 in pre-seed funding to a team of two or more technopreneurs from Penang to kick-start the development of their ideas.

“Subsequently, the technopreneurs can form their own company to commercialise their intellectual property or sell it to a third party,” he said after signing a Memorandum of Understanding (MoU) with Software Consortium of Penang (Scope) chairman Jeffrey Lim.

The MoU allows Scope to assist Cradle in screening and approving applications for funds from the Cradle Investment Programme (CIP), Malaysia’s first development and commercialisation programme, which enables budding innovators and aspiring entrepreneurs to transform their raw technology-based ideas into commercially viable ventures.

Nazrin Hassan (right) exchanging documents with Jeffrey Lim, witnessed by Datuk Boonler Somchit

Also present was Penang Skills and Development Centre chief executive officer Datuk Boonler Somchit, who witnessed the signing.

Nazrin said for the seed grant, Cradle would be giving up to RM500,000 to a company for commercialising its products.

“To date, we have given a total of RM35mil in pre-seed grants to 387 ideas from all over Malaysia. About 50% of the ideas come from Selangor, and the rest from Penang and other parts of the country.

“Over 50% of the 387 ideas have been successfully commercialised. Some 70% of the ideas we funded were from the information and communication technology sector, while the remainder were ideas from the life and material sciences,” he said.

Meanwhile, Lim said the MoU would allow more technopreneurs from Penang to gain access to funding from the CIP.

“It serves as a catalyst for the creation and growth of a total eco-system for the development and commercialisation of high-technology business in the northern region,” he said.

By DAVID TAN davidtan@thestar.com.my
