ELDRIDGE DOUBLEDAY - Ramblings

Stock buybacks, innovation, and the inexorable death of reason.
Mon, 02 Sep 2019

A quick search of our good friend Google will reveal multiple pages of results wherein various experts opine about the failure of companies to innovate (while alternately cheerleading those that do (or give the impression that they do)).

While it is true that more and more companies are investing in innovation incubation, that incubation surfaces in a variety of forms. In some cases, it appears as dedicated time to work on new projects apart from the day-to-day (Google made this approach spuriously famous); in others, as dedicated organizations created to drive new products and business models (the Ford/IDEO collaboration). Not to be outdone, seemingly every consulting organization on the globe has created an innovation playbook (including my former employer) with a step-by-step guide to fostering a culture of innovation.

However, for all the talk and showmanship, much of this fabled innovation is vaporware, floating discordantly in a vacuum of its own creation, never to see the light of day, while some “innovations” are fraudulent from the start. Often, even legitimate innovation initiatives limp to market only after being beaten half to death by the operational realities of the company, each step of the development cycle slicing off a pound of flesh until what remains is a pale reflection of the original intention. There are many, many reasons this happens, and these failures often reflect the absence of internal capabilities to successfully deliver to market (people), a lack of organizational readiness (process), and technical deficits that prevent the realization of the product or service (technology).

I don’t intend to add to an already oversaturated field of expert opinion on the subject of innovation. The failings of companies to innovate are often rooted in their failings to operate effectively in their current environments (the threefold people/process/technology conundrum above). As corporations have grown cumbersome and unwieldy, any number of forces have come to inhibit effective innovation, but I shall endeavor to point to one culprit that, at the very least, has enabled the others: stock buybacks.

Although stock buybacks are very much in the news in the wake of the Republican tax cuts and soaring corporate profits, they’ve been around for a while now. While never explicitly illegal, buybacks remained taboo for nearly half a century (for good reason) until the Reagan-appointed head of the SEC, John Shad, introduced a new rule that essentially gave companies free rein to go nuts. If you’re a billionaire investor or the head of a major corporation, this has worked out pretty well for you. If you’re pretty much anyone else, maybe not so much.

So, what do stock buybacks have to do with innovation?

Easy: stock buybacks divert capital away from reinvestment in infrastructure (technology), from wage growth and the ability to attract and retain talent (people), and from sustaining a culture of long-term strategic thinking and enablement (process). For those of us working in these organizations, many of our collective woes stem from one, if not all, of the above. And although we’ve seen moderate increases in capital spending in the wake of the 2017 tax cuts, they pale in comparison to the amounts being directed toward buybacks. Those of us on the ground in the design and development realms are often starved of the capabilities necessary to effectively deliver the experiences we design to market (yet are held accountable when they fail to deliver the intended financial results).

You’ve likely encountered this in your own work, where great products and services fail to launch (or scale) for lack of available infrastructure – manifesting as disconnected systems, siloed architecture and data structures, a lack of organizational agility, or an absence of the skills required to realize the vision in an increasingly complex environment. The punchline in all of this is the chief executive who rants and raves about their organization’s inability to deliver “innovative” products and services while redistributing capital to investors (and themselves).

Further, buybacks reflect the cultural deficits currently at play in America, where self-obsessed investors, who produce absolutely nothing of value, exert ever more control over the destinies of major corporations and profit on the backs of their workers. These investors, who seek only immediate returns on their investments, stifle innovation (or the ability to deliver effectively on innovation programs) by encouraging (in the Game of Thrones sense) companies to prioritize the short term at the expense of long-term strategy.

None of this is tenable. The average lifespan of an S&P 500 company in the 1950s was roughly 60 years. Today it is around 20 years, and by 2027 it is expected to fall to 12. Of course, investors don’t care – they’ll move on to the next thing like the parasites they are. For the rest of us, this poses significant problems for our future stability. Remember this the next time your leadership criticizes your ability to innovate. Remember that they are the ones actively redistributing the resources required for you to do so.

And they’re making a killing doing it.
You, me, and AI makes three.
Mon, 01 Jul 2019

With each passing year, I more deeply wrap myself in a veil of irony when considering my graduate field of study. This is not a condemnation, necessarily, but more a realization that my first field of study prepared me for my career more adequately than graduate school ever could. While the field of design continues to validate its necessity in these complex times, what is more urgent in this era of exponential technological advancement is a nuanced understanding of the factors, processes, and constructs that dictate and define culture. Anthropologists save the day again.

With the rapid evolution and dissemination of artificial intelligence capabilities across all industries, we’re collectively reaching a tipping point where sophisticated machine learning algorithms steadily dictate how we engage with, interact with, and perceive the world around us while automating many aspects of our daily lives (or replacing us outright). These forces will be extremely disruptive and will have irrevocable effects on us, the hairless apes wandering the wilds of the new AI-enabled technocracy (claptrap alert).

I won’t waste time moralizing about the growth of artificial intelligence and automation, as there are many, many conflicting opinions on the role AI will play in the future (for and against) formed by people much smarter than I. In my weaker moments, I too am enamored of the notion of a utopian society where AI-enabled automation frees us to pursue more meaningful endeavors as machines take on perfunctory, low-order tasks. I tend to fall more on the “change spectrum”, where our daily drudgery is not removed outright, but rather the nature of our efforts changes to fit our new reality. I, like many others, fear the rise of the killbots, but am far more concerned about the tedium this new reality could create – a far greater threat to global stability and well-being.

Of course, none of this is new. Fear of technology replacing human jobs, and the stability they provide, is a hallmark of the post-industrial world. It’s deeply embedded in American folklore. In some cases, the resistance is an assiduous stubbornness born of our own manufactured beliefs in the virtue of effort. In most, it’s the very real and practical fear that one will no longer be able to provide for oneself or one’s family. Much needs to be done to enable smooth transitions into new domains, and it’s debatable whether we are currently doing enough to prepare ourselves for an automated future. Naturally, the object of our resistance is rooted in our own collective nature – the pioneering spirit that has always moved humankind forward, often haphazardly, is the same spirit that drives technological advancement, often devoid of consideration of the social, cultural, or economic ramifications of that evolution. This persistent push-to-the-new approach, with its much-heralded champion, innovation, presupposes that all technological advancement is inherently good (without even a passing acknowledgment of the horrible things (pithy) that have occurred when advancement proceeds devoid of thoughtful consideration of the possible outcomes).

Further, this advancement, or what can, often laughably, be referred to as progress, only appears as such in the eyes of its progenitors. This mismatch is the result of the biases at play in the creators of new technologies, and of how those biases manifest in the technology itself (and are thrust upon the rest of us). Our biases compound across different levels of societal stratification – by nation, region, locality, and even familial constructs. These biases need not be insidious by nature. Some are just entrenched worldviews or perspectives on how the world does or should operate – sometimes formed with the best available evidence. That makes their effect no less profound. Indeed, the road to hell is often paved with good intentions. Not even the noblest and purest (often in absentia in tech) are immune, and the biases that influence their decision-making will continue to cascade into the technology controlling our world, determining whether your car will decide to kill you, whether you are eligible to receive credit or loans, or worse. In an automated world, the scope and scale of these biases will have increasingly broad implications for society.
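To make the mechanism concrete, here is a toy sketch – not any real lender’s model; every name and number is invented – of how a seemingly neutral input can carry bias into an automated credit decision:

```python
# A toy illustration (hypothetical, not any real lender's model) of how
# an apparently neutral feature can smuggle bias into an automated
# decision. The model never sees a protected attribute directly, but
# "zip_code" correlates with one in the imagined historical data, so the
# bias cascades into the output anyway.

HYPOTHETICAL_ZIP_RISK = {
    # Invented "historical default rates" that themselves encode decades
    # of biased lending -- the model inherits them uncritically.
    "10001": 0.05,
    "60621": 0.35,
}

def credit_decision(income: float, zip_code: str) -> bool:
    """Approve a loan using income and a zip-code 'risk' score."""
    risk = HYPOTHETICAL_ZIP_RISK.get(zip_code, 0.15)
    score = income / 10_000 - risk * 10
    return score > 2.0

# Two applicants with identical incomes receive different outcomes
# purely because of where they live:
print(credit_decision(50_000, "10001"))  # True
print(credit_decision(50_000, "60621"))  # False
```

The model never touches a protected attribute, yet two identical applicants diverge purely by neighborhood – exactly the cascading bias described above.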

In the past two decades, we’ve seen a surge of anthropologists (and all stripes of social scientists) employed in industry (for lots of good reasons), working to bring cultural insight to the forefront in the design of products and services. Now, we need to cast our gaze inward to meaningfully assess the culture of companies themselves. We need to understand how these cultures influence the delivery of products of every kind, and how they support (intentionally or not) the insertion of bias into the products themselves. In the coming age, it is essential to shape the culture of tech companies and how they approach the design and development of artificial intelligence: to create standards and practices that promote introspection, diverse perspectives, and awareness of the ways individual and collective biases influence the products and services we create. We may never remove bias from our natures, but we can at least prevent it from seeping into the technology we increasingly rely on.
My data brings all the boys to the yard.
Sat, 01 Dec 2018

One of the most astounding developments in recent years, and mind you there has been no shortage of “astounding” developments, has been the elevation of the discourse around personal data in the wake of Cambridge Analytica. If it so happens you have been living in a cave for the past year, a primer before you carry on. Personal data in the news is actually not a recent development, sadly, owing to repeated breaches and thefts caused by companies either too lazy or too cheap to properly safeguard consumer data. What makes this year so different is the tone of the conversation, and the notion of what data is and how we, as its producers, have been removed from any decision-making regarding its use.

Jaron Lanier makes a number of solid points on the ridiculousness of the current state of the connected digital landscape, but how did we get here? How did things get to this point so rapidly? I’ll point to two factors. The first is an abject misunderstanding of how the digital tools we engage with every day actually function. Of the two, this one is the more forgivable. Clarke’s third law notwithstanding, we as humans can’t be expected to understand the tedious details of every process on Earth. We have finite cognitive resources and disparate personal interests that drive us as individuals. Case in point: I can comfortably traverse two levels deep into a conversation on plant husbandry before my knowledge gaps become woefully apparent. And that’s fine, as plant husbandry is not a topic that particularly interests me (although I can appreciate the need for study in that area, as in many others). The same goes for most utilities – we consume but aren’t necessarily expected to understand every process along the chain of delivery (and, again, that’s fine). We place our trust in others to know these things.

The second factor, which drives much of our current woes, is the role of the legislators and regulatory functions charged with establishing the rules for digital or digitally-enabled organizations. These functions, by many accounts, remain populated by individuals possessing the same lack of understanding as the rest of us, with the notable distinction that they have a responsibility to be knowledgeable (or passingly so) about the topics they are intended to regulate. Additionally, the nausea-inducing fawning of politicians over tech companies (or, it seems, all profitable companies) often results in these companies being given a pass (even in the current climate). Facebook and Google have been hammered this year, with lots of public hearings and private conversations with Congress, but what has actually happened as a result (beyond the dispatching of even more lobbyists)?

We are the producers of an ever-increasing trove of data, both as individuals and in aggregate, and that data is a veritable gold mine for these companies (yet one could argue that our personal data is, in fact, a fictitious commodity, as it is the backbone of our new, digitally-enabled identity). In less delicate parlance, data is our digital effluence – the inevitable byproduct of a new lifeway. This prompts a larger question in the context of such unsavory metaphors: who owns our waste? In the current model, companies are generating large returns collecting and processing our data, with none of that tangible value returning to the producer. This is, at the least, an inequitable value exchange, and, at worst, unadulterated thievery.

The common, and controversial, argument from the data purveyors is that free access to the platform is the trade-off for data harvesting and commodification. Even if that logic were to hold, it would still require more transparency into how our data is being used than is currently provided. At the very least, we must be given more explicit opportunities to manage the types of personal data we allow companies to harvest, and even more so the types of data we allow them to bundle and sell. GDPR rules in Europe and the CCPA in California go a long way towards reining in the lazy approach to personal data security most companies have taken as they’ve transitioned into digital businesses, which is a good thing, but they do nothing to address the notion of data ownership.

Lanier goes a step further by attempting to define new economies built around a proprietary ownership model of our own data. To oversimplify, this is a royalty model in which our data generates a return for each use, and we maintain explicit control of where, when, and how it is captured and utilized. Although there are lots of upsides to this, Lanier is also quick to explore the downside, where such a market creates a new data hierarchy – one where socio-economic status drives definitions of value and leaves vulnerable populations tilting in the digital wind. Decentralizing data through blockchain is another common thread – so that our data is no longer held behind corporate firewalls, where it can be picked over away from prying eyeballs. This, of course, presumes that social media platforms are the only source of our data woes, a point Facebook themselves called out in one of many defensive postures this year.
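For the concretely minded, here is a minimal sketch of that royalty mechanic as I read it – the names, rates, and structures are entirely hypothetical, a thought experiment rather than anything Lanier has actually specified:

```python
from dataclasses import dataclass, field

# A minimal sketch of the royalty idea: every use of a person's data
# generates a payment back to its producer, and the producer's ledger
# records who used what. All names and rates here are hypothetical.

@dataclass
class DataProducer:
    name: str
    royalty_rate: float              # payment owed per use, e.g. $0.002
    balance: float = 0.0
    usage_log: list = field(default_factory=list)

    def record_use(self, consumer: str, purpose: str) -> float:
        """Charge a consumer for one use of this producer's data."""
        self.balance += self.royalty_rate
        self.usage_log.append((consumer, purpose))
        return self.royalty_rate

me = DataProducer(name="alice", royalty_rate=0.002)
me.record_use("AdPlatformX", "targeted ad")
me.record_use("InsurerY", "risk model training")
print(f"owed: ${me.balance:.3f}")   # owed: $0.004
print(me.usage_log)
```

Even in this crude form, the two properties Lanier cares about are visible: value flows back to the producer, and the producer holds the record of use rather than the platform.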

So, where do we go from here? We are most certainly in uncharted waters, and although governments, companies, and individuals are busy proposing new regulatory constructs and approaches to securing and managing our data, Pandora’s box has already been opened. At its simplest, we must be presented with clear definitions of how our data is being collected and used, as well as the tools necessary to manage it across the reach of our digital presence. Additionally, we must demand accountability from the services we employ, even if it means pulling the plug on companies that flippantly trade in our data. I, for one, believe the development of a digital passport – a unique, individualized digital token reflecting both my digital identity and my rules for data collection and redistribution – could help. Not only would this simplify the creation of new digital credentials, it would also serve as a digital record for determining which companies are violating my terms of service (rather than the other way ’round) and could form part of a new platform for redistributing data wealth back to the producers who create it. One can dream.
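And while dreaming, here is a rough sketch of what such a passport token might look like – every field name, category, and method below is hypothetical, a thought experiment rather than a spec:

```python
from dataclasses import dataclass, field
import uuid

# A rough sketch of the "digital passport" imagined above: one token
# that carries an identity plus the holder's rules for collection and
# resale, and that can check a company's request against those rules.
# Everything here -- field names, categories, methods -- is hypothetical.

@dataclass
class DigitalPassport:
    token: str = field(default_factory=lambda: uuid.uuid4().hex)
    collectable: set = field(default_factory=lambda: {"email", "purchase_history"})
    resellable: set = field(default_factory=set)   # nothing resellable by default
    violations: list = field(default_factory=list)

    def permits(self, company: str, category: str, resale: bool) -> bool:
        """Return True if the request complies with the holder's terms;
        otherwise log the company as a terms-of-service violator."""
        allowed = category in (self.resellable if resale else self.collectable)
        if not allowed:
            self.violations.append((company, category, resale))
        return allowed

passport = DigitalPassport()
passport.permits("RetailerA", "purchase_history", resale=False)  # True
passport.permits("BrokerB", "purchase_history", resale=True)     # False, logged
print(passport.violations)  # [('BrokerB', 'purchase_history', True)]
```

The point of the sketch is the inversion: the terms live with me, and the violation log accumulates on my side of the ledger rather than theirs.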