Fixed Headers and Jump Links? The Solution is scroll-margin-top

The problem: you click a jump link like <a href="#header-3">Jump</a> which links to something like <h3 id="header-3">Header</h3>. That’s totally fine, until you have a position: fixed; header at the top of the page obscuring the header you’re trying to link to!

Fixed headers have a nasty habit of hiding the element you’re trying to link to.

There used to be all kinds of wild hacks to get around this problem. In fact, in the design of CSS-Tricks as I write, I was like, “Screw it, I’ll just have a big generous padding-top on my in-article headers because I don’t mind that look anyway.”

But there is actually a really straightforward way of handling this in CSS now.

h3 {
  scroll-margin-top: 5rem; /* whatever is a nice number that gets you past the header */
}
We have an Almanac article on it, which includes browser support, which is essentially everywhere. It’s often talked about in conjunction with scroll snapping, but I find this use case even more practical.

There’s a simple CodePen demo embedded in the original post.

In a related vein, the weird (but cool) “text fragments” links that Chrome shipped scroll the target to the middle of the page instead, which I think is nice.

The post Fixed Headers and Jump Links? The Solution is scroll-margin-top appeared first on CSS-Tricks.

Author: Chris Coyier

Continue Reading

Setting Fairness Goals with the TensorFlow Constrained Optimization Library

Many technologies that use supervised machine learning are having an increasingly positive impact on people’s day-to-day lives, from catching early signs of illnesses to filtering inappropriate content. There is, however, a growing concern that learned models, which generally satisfy the narrow requirement of minimizing a single loss function, may have difficulty addressing broader societal issues such as fairness, which generally requires trading off multiple competing considerations. Even when such factors are taken into account, these systems may still be incapable of satisfying such complex design requirements, for example that a false negative might be “worse” than a false positive, or that the model being trained should be “similar” to a pre-existing model.

The TensorFlow Constrained Optimization (TFCO) library makes it easy to configure and train machine learning problems based on multiple different metrics (e.g. the precisions on members of certain groups, the true positive rates on residents of certain countries, or the recall rates of cancer diagnoses depending on age and gender). While these metrics are simple conceptually, by offering a user the ability to minimize and constrain arbitrary combinations of them, TFCO makes it easy to formulate and solve many problems of interest to the fairness community in particular (such as equalized odds and predictive parity) and the machine learning community more generally.
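To make the rate-expression idea concrete, here is a rough sketch in plain Python. Note that this is purely conceptual and is not the actual TFCO API; the function names and data are invented for illustration. The point is that each metric is a simple rate (a count divided by a count), and the objective and constraints are just algebraic combinations of such rates.

```python
# Conceptual sketch: metrics as simple "rates" that can be combined
# algebraically into an objective and constraints. This mirrors the idea
# behind TFCO's rate expressions, not its real API.

def error_rate(predictions, labels):
    """Fraction of examples where the thresholded prediction disagrees with the label."""
    wrong = sum(1 for p, y in zip(predictions, labels) if (p >= 0.5) != (y == 1))
    return wrong / len(labels)

def true_positive_rate(predictions, labels):
    """Fraction of positively-labeled examples that receive a positive prediction."""
    positives = [(p, y) for p, y in zip(predictions, labels) if y == 1]
    hits = sum(1 for p, _ in positives if p >= 0.5)
    return hits / len(positives)

preds  = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1]
labels = [1,   0,   1,   1,   0,   0]

objective  = error_rate(preds, labels)           # minimize this...
constraint = true_positive_rate(preds, labels)   # ...subject to, e.g., TPR >= some floor
```

In TFCO itself, expressions like these are built symbolically over model outputs so they can be differentiated through (via surrogates) and optimized during training.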

How Does TFCO Relate to Our AI Principles?
The release of TFCO puts our AI Principles into action, further helping guide the ethical development and use of AI in research and in practice. By putting TFCO into the hands of developers, we aim to better equip them to identify where their models can be risky and harmful, and to set constraints that ensure their models achieve desirable outcomes.

What Are the Goals?
Borrowing an example from Hardt et al., consider the task of learning a classifier that decides whether a person should receive a loan (a positive prediction) or not (negative), based on a dataset of people who either are able to repay a loan (a positive label), or are not (negative). To set up this problem in TFCO, we would choose an objective function that rewards the model for granting loans to those people who will pay them back, and would also impose fairness constraints that prevent it from unfairly denying loans to certain protected groups of people. In TFCO, the objective to minimize, and the constraints to impose, are represented as algebraic expressions (using normal Python operators) of simple basic rates.
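A hedged sketch of what an “equal opportunity” check looks like in plain Python (again, illustrative only, with made-up data and names, not the TFCO API): the constraint asks that the true positive rates of the protected groups differ by at most a small slack epsilon.

```python
# Checking an "equal opportunity" constraint: the true positive rates of
# two protected groups should differ by at most a slack epsilon.
# Data and group labels here are hypothetical.

def group_tpr(predictions, labels, groups, group):
    """TPR restricted to positively-labeled members of one group."""
    pos = [(p, y) for p, y, g in zip(predictions, labels, groups)
           if g == group and y == 1]
    return sum(1 for p, _ in pos if p >= 0.5) / len(pos)

preds  = [0.8, 0.3, 0.9, 0.6, 0.2, 0.7]
labels = [1,   1,   1,   1,   0,   1]
groups = ["blue", "blue", "blue", "orange", "orange", "orange"]

tpr_blue   = group_tpr(preds, labels, groups, "blue")
tpr_orange = group_tpr(preds, labels, groups, "orange")

epsilon = 0.05
satisfied = abs(tpr_blue - tpr_orange) <= epsilon
```

In TFCO the analogous constraint would be written as a rate expression over group-restricted subsets and enforced during training rather than merely checked after the fact.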

Instructing TFCO to minimize the overall error rate of the learned classifier for a linear model (with no fairness constraints), might yield a decision boundary that looks like this:

Illustration of a binary classification dataset with two protected groups: blue and orange. For ease of visualization, rather than plotting each individual data point, the densities are represented as ovals. The positive and negative signs denote the labels. The decision boundary, drawn as a black dashed line, separates positive predictions (regions above the line) from negative ones (regions below the line), and is chosen to maximize accuracy.

This is a fine classifier, but in certain applications, one might consider it to be unfair. For example, positively-labeled blue examples are much more likely to receive negative predictions than positively-labeled orange examples, violating the “equal opportunity” principle. To correct this, one could add an equal opportunity constraint to the constraint list. The resulting classifier would now look something like this:

Here the decision boundary is chosen to maximize the accuracy, subject to an equal opportunity (or true positive rate) constraint.

How Do I Know What Constraints To Set?
Choosing the “right” constraints depends on the policy goals or requirements of your problem and your users. For this reason, we’ve striven to avoid forcing the user to choose from a curated list of “baked-in” problems. Instead, we’ve tried to maximize flexibility by enabling the user to define an extremely broad range of possible problems, by combining and manipulating simple basic rates.

This flexibility can have a downside: if one isn’t careful, one might attempt to impose contradictory constraints, resulting in a constrained problem with no good solutions. In the context of the above example, one could constrain the false positive rates (FPRs) to be equal, in addition to the true positive rates (TPRs) (i.e., “equalized odds”). However, the potentially contradictory nature of these two sets of constraints, coupled with our requirement for a linear model, could force us to find a solution with extremely low accuracy. For example:

Here the decision boundary is chosen to maximize the accuracy, subject to both the true positive rate and false positive rate constraints.

With an insufficiently-flexible model, either the FPRs of both groups would be equal, but very large (as in the case illustrated above), or the TPRs would be equal, but very small (not shown).

Can It Fail?
The ability to express many fairness goals as rate constraints can help drive progress in the responsible development of machine learning, but it also requires developers to carefully consider the problem they are trying to address. For example, suppose one constrains the training to give equal accuracy for four groups, but that one of those groups is much harder to classify. In this case, it could be that the only way to satisfy the constraints is by decreasing the accuracy of the three easier groups, so that they match the low accuracy of the fourth group. This probably isn’t the desired outcome.

A “safer” alternative is to constrain each group to independently satisfy some absolute metric, for example by requiring each group to achieve at least 75% accuracy. Using such absolute constraints rather than relative constraints will generally keep the groups from dragging each other down. Of course, it is possible to ask for a minimum accuracy that isn’t achievable, so some conscientiousness is still required.
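The “absolute metric” alternative can be sketched in a few lines (plain Python with made-up data): each group must independently clear a fixed accuracy floor, so no group’s constraint depends on another group’s performance.

```python
# Sketch of the "safer" absolute-constraint check described above: each
# group must independently reach at least 75% accuracy, instead of being
# tied to the other groups' accuracies. Data here is hypothetical.

def group_accuracy(predictions, labels, groups, group):
    pairs = [(p, y) for p, y, g in zip(predictions, labels, groups) if g == group]
    correct = sum(1 for p, y in pairs if (p >= 0.5) == (y == 1))
    return correct / len(pairs)

preds  = [0.9, 0.8, 0.3, 0.2, 0.7, 0.6, 0.4, 0.9]
labels = [1,   1,   0,   0,   1,   0,   0,   1]
groups = ["a"] * 4 + ["b"] * 4

FLOOR = 0.75
violations = [g for g in ("a", "b")
              if group_accuracy(preds, labels, groups, g) < FLOOR]
```

As the text notes, the floor itself can still be set too high to be achievable, so the choice of threshold remains a judgment call.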

The Curse of Small Sample Sizes
Another common challenge with using constrained optimization is that the groups to which constraints are applied may be under-represented in the dataset. Consequently, the stochastic gradients we compute during training will be very noisy, resulting in slow convergence. In such a scenario, we recommend that users impose the constraints on a separate rebalanced dataset that contains higher proportions from each group, and use the original dataset only to minimize the objective.
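The rebalancing idea can be sketched as simple oversampling (a minimal, purely illustrative stand-in; real pipelines would typically resample per batch and keep the original dataset for the objective):

```python
# Minimal sketch of rebalancing: oversample the under-represented group so
# the dataset used for the *constraints* has a healthier group balance.
# The function and data below are invented for illustration.

import random

def rebalance(examples, group_of, target_group, factor, seed=0):
    """Duplicate examples of `target_group` an extra `factor` times, then shuffle."""
    rng = random.Random(seed)
    extra = [ex for ex in examples if group_of(ex) == target_group] * factor
    out = examples + extra
    rng.shuffle(out)
    return out

data = [("x1", "majority"), ("x2", "majority"), ("x3", "majority"), ("x4", "minority")]
balanced = rebalance(data, group_of=lambda ex: ex[1], target_group="minority", factor=2)
# the minority group now makes up 3 of the 6 examples used for the constraints
```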

For example, in the Wiki toxicity example we provide, we wish to predict if a discussion comment posted on a Wiki talk page is toxic (i.e., contains “rude, disrespectful or unreasonable” content). Only 1.3% of the comments mention a term related to “sexuality”, and a large fraction of these comments are labelled toxic. Hence, training a CNN model without constraints on this dataset leads to the model believing that “sexuality” is a strong indicator of toxicity and results in a high false positive rate for this group. We use TFCO to constrain the false positive rate for four sensitive topics (sexuality, gender identity, religion and race) to be within 2%. To better handle the small group sizes, we use a “re-balanced” dataset to enforce the constraints and the original dataset only to minimize the objective. As shown below, the constrained model is able to significantly lower the false positive rates on the four topic groups, while maintaining almost the same accuracy as the unconstrained model.

Comparison of unconstrained and constrained CNN models for classifying toxic comments on Wiki Talk pages.

Intersectionality – The Challenge of Fine-Grained Groups
Overlapping constraints can help create equitable experiences for multiple categories of historically marginalized and minority groups. Extending beyond the above example, we also provide a CelebA example that examines a computer vision model for detecting smiles in images that we wish to perform well across multiple non-mutually-exclusive protected groups. The false positive rate can be an appropriate metric here, since it measures the fraction of images not containing a smiling face that are incorrectly labeled as smiling. By comparing false positive rates based on available age group (young and old) or sex (male and female) categories, we can check for undesirable model bias (i.e., whether images of older people that are smiling are not recognized as such).

Comparison of unconstrained and constrained CNN models for detecting smiles in CelebA images.

Under the Hood
Correctly handling rate constraints is challenging because, being written in terms of counts (e.g., the accuracy rate is the number of correct predictions, divided by the number of examples), the constraint functions are non-differentiable. Algorithmically, TFCO converts a constrained problem into a non-zero-sum two-player game (ALT’19, JMLR’19). This framework can be extended to handle the ranking and regression settings (AAAI’20), more complex metrics such as the F-measure (NeurIPS’19a), or to improve generalization performance (ICML’19).
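The two-player-game idea can be illustrated with an ordinary Lagrangian on a toy one-parameter problem. This is a deliberate simplification — TFCO’s actual formulation is a non-zero-sum proxy-Lagrangian game over non-differentiable rates — but the descent/ascent dynamic is the same: one player updates the model parameters, the other updates the constraint multipliers.

```python
# Toy sketch of constrained optimization as a two-player game:
# player 1 (theta) descends the Lagrangian, player 2 (lam) ascends it.
# Minimize f(theta) = theta^2 subject to g(theta) = 1 - theta <= 0,
# whose solution is theta = 1 with multiplier lam = 2.

theta, lam = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    grad_theta = 2 * theta - lam   # d/dtheta of [theta^2 + lam * (1 - theta)]
    theta -= lr * grad_theta       # descent step for the model player
    lam += lr * (1 - theta)        # ascent step for the constraint player
    lam = max(lam, 0.0)            # multipliers must stay non-negative
```

At the saddle point the constraint is active (theta = 1) and the multiplier settles at lam = 2, balancing the objective’s pull toward theta = 0 against the constraint.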

It is our belief that the TFCO library will be useful in training ML models that take into account the societal and cultural factors necessary to satisfy real-world requirements. Our provided examples (toxicity classification and smile detection) only scratch the surface. We hope that TFCO’s flexibility enables you to handle your problem’s unique requirements.

This work was a collaborative effort by the authors of TFCO and associated research papers, including Andrew Cotter, Maya R. Gupta, Heinrich Jiang, Harikrishna Narasimhan, Taman Narayan, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, and Seungil You.

Author: Google AI

Continue Reading

AI is Changing the Pattern for How Software is Developed

By AI Trends Staff  

Software developers are using AI to help write and review code, detect bugs, test software and optimize development projects. This assistance is helping companies to deploy new software more efficiently, and to allow a new generation of developers to learn to code more easily. 

These are conclusions of a recent report on AI in software development published by Deloitte and summarized in a recent article in Forbes. Authors David Schatsky and Sourabh Bumb describe how a range of companies have launched dozens of AI-driven software development tools over the past 18 months. The market is growing with startups raising $704 million in the year ending September 2019.  

The new tools can be used to help reduce keystrokes, detect bugs as software is being written and automate many of the tests needed to confirm the quality of software. This is important in an era of increasing reliance on open source code, which can come with bugs. 

While some fear automation may take jobs away from coders, the Deloitte authors see it as unlikely.  

“For the most part, these AI tools are helping and augmenting humans, not replacing them,” Schatsky stated. “These tools are helping to democratize coding and software development, allowing individuals not necessarily trained in coding to fill talent gaps and learn new skills. There is also AI-driven code review, providing quality assurance before you even run the code.” 

A study from Forrester in 2018 found that 37 percent of companies involved in software development were using coding tools powered by AI. The percentage is likely to be higher now, with companies such as Tara, DeepCode, Kite, Functionize, and Deep TabNine, among many others, providing automated coding services. 

Success seems to be accelerating the trend. “Many companies that have implemented these AI tools have seen improved quality in the end products, in addition to reducing both cost and time,” stated Schatsky.  

The Deloitte study said AI can help alleviate a chronic shortage of talented developers. Poor software quality cost US organizations an estimated $319 billion last year. The application of AI has the potential to mitigate these challenges. 

Deloitte sees AI helping in many stages of software development, including: project requirements, coding review, bug detection and resolution, more thorough testing, deployment, and project management.

IBM Engineer Learned AI Development Lessons from Watson Project 

IBM Distinguished Engineer Bill Higgins, based in Raleigh, NC, who has spent 20 years in software development at the company, recently published an account on the impact of AI in software development in Medium.  

Organizations need to “unlearn” the patterns for how they have developed software in the past. “If it’s difficult for an individual to adapt, it’s a million times harder for a company to adapt,” the author stated.   

Higgins was the lead for IBM’s AI for developers mission within the Watson group. “It turned out my lack of personal experience with AI was an asset,” he stated. He had to go through his own learning journey and thus gained deeper understanding and empathy for developers needing to adapt.  

To learn about AI in software development, Higgins said he studied how others have applied it (the problem space) and the cases in which using AI is superior to alternatives (the solution space). This was important to understanding what was possible and to avoid “magical thinking.” 

The author said his journey was the most intense and difficult learning he had done since getting a computer science degree at Penn State. “It was so difficult to rewire my mind to think about software systems that improve from experience, vs. software systems that merely do the things you told them to do,” he stated.  

IBM developed a conceptual model to help enterprises think about AI-based transformation called the AI Ladder. The ladder has four rungs: collect, organize, analyze and infuse. Most enterprises have lots of data, often organized in siloed IT work or from acquisitions. A given enterprise may have 20 databases and three data warehouses with redundant and inconsistent information about customers. The same is true for other data types such as orders, employees and product information. “IBM promoted the AI Ladder to conceptually climb out of this morass,” Higgins stated.  

In the infusion stage, the company works to integrate trained machine learning models into production systems, and design feedback loops so the models can continue to improve from experience. An example of infused AI is the Netflix recommendation system, powered by sophisticated machine learning models. 

IBM determined that a combination of APIs, pre-built ML models, and optional tooling could encapsulate the collect, organize, and analyze rungs of the AI Ladder for common ML domains such as natural language understanding, conversations with virtual agents, visual recognition, speech, and enterprise search. 

For example, Watson’s Natural Language Understanding became rich and complex. Machine learning is now good at understanding many aspects of language including concepts, relationships between concepts and emotional content. Now the NLU service and the R&D on machine learning-based natural language processing can be made available to developers via an elegant API and supporting SDKs. 

“Thus developers can today begin leveraging certain types of AI in their applications, even if they lack any formal training in data science or machine learning,” Higgins stated.  

It does not eliminate the AI learning curve, but it makes it a more gentle curve.  

Read the source articles in Forbes and Medium.  

Author: Allison Proffitt

Continue Reading

Digital Marketing News: How B2B Buyers Pick Vendors, BuzzSumo’s New YouTube Insights, ABM Brings B2B Improvements, & How Brands Use Direct Messaging

2020 February 21 MarketingCharts Chart

B2B buyers consume an average of 13 content pieces before deciding on a vendor
B2B buyers say that they take in an average of 13 pieces of online content before choosing a vendor, consisting of eight vendor-made and five third-party elements, according to newly-released survey data from FocusVision, containing a number of additional insights of interest to digital marketers. Marketing Land

B2B Firms Are Failing To Integrate Sales, Marketing And Customer Success Teams: Study
Revenue growth is a top challenge for B2B brands, and some 84 percent of B2B sales and marketing professionals say that revenue responsibility rests with both sales and marketing; however, 37 percent say the two functions aren’t optimally aligned — some of the findings in new survey data from LeanData and Sales Hacker. MediaPost

Content Analysis App BuzzSumo Adds YouTube Insights
BuzzSumo’s array of content data tools recently expanded with an addition that brings YouTube content discovery, performance data, and YouTube influencer information to the platform, with its new YouTube Analyzer feature, the firm announced. Social Media Today

The State of B2B Account-Based Marketing
Account-based marketing (ABM) has driven success for B2B marketers, and some 92 percent plan to ramp up the scale of their ABM campaigns during the coming year, some of the many findings of interest to B2B marketers in a recently-released report from the Information Technology Services Marketing Association (ITSMA) and the ABM Leadership Alliance. MarketingProfs

70% of Customers Believe In-App Chat Helps to Simplify Customer Experience, Reveals UJET Report
Offering in-app chat features helps customer experience according to 70 percent of consumers in recently-released survey data, with the majority also wanting the ability to upload images and screen-shots to help explain their needs, the survey noted. MarTech Advisor

FTC votes to review influencer marketing rules & penalties
The U.S. Federal Trade Commission will reexamine its recommendations, rules, and penalties relating to influencer marketing in its non-binding Endorsement Guides. The FTC said it will also look at how influencer marketing affects and is understood by children. TechCrunch

2020 February 21 Statistics Image

How Email Responsiveness Builds Trust [Infographic]
Despite or perhaps because of its long digital history, U.S. consumer email use still sits around 90 percent, with people aged 24-44 using it most at 93 percent, while 33 percent of Gen X and Baby Boomers expect an email response in less than one hour — some of the numerous statistics of interest to marketers in newly-released infographic data. Social Media Today

California AG Publishes Updated CCPA Regs With Far More Clarity Than The First Draft
With implementation and compliance of the California Consumer Privacy Act (CCPA) still far from universal, the California attorney general’s office recently released a new regulations draft document and has solicited feedback from marketers and others through February 25, 2020. AdExchanger

How brands are using Apple, Google, Facebook and texting to chat up consumers
Care is key for brands using direct messaging to connect with consumers, and AdAge looks at how Google, Apple and other large brands are skipping the need for custom apps and turning instead to direct messaging and mobile marketing. AdAge

10 LinkedIn Stats to Guide Your Social Media Marketing Strategy in 2020 [Infographic]
LinkedIn (client) has an average user session length of over six minutes, with 52 percent of buyers listing the Microsoft-owned platform as the most influential channel during the research process — some of the information contained in a recent infographic. Social Media Today

Instagram overtakes Facebook in audience of top 50 brands, study says
Instagram advertising spending overtook that of parent company Facebook when it comes to 50 top brands, according to newly-released report data, with influencer marketing also showing swift growth. Mobile Marketer

Here’s How PR Pros Are Using Social Listening
67 percent of global marketing communications professionals now see influencer marketing within their scope, while some 77 percent say the same about content marketing, and 56 percent about search engine optimization (SEO), according to newly-released marcomm survey data from Talkwalker. MarketingCharts


2020 February 21 Marketoonist Comic

A lighthearted look at the art of project management by Marketoonist Tom Fishburne — Marketoonist

Check the attic! These 8 old tech items could be worth a lot of money — USA Today


  • Lee Odden / TopRank Marketing — TopRank References Brian Solis as an Influencer Role Model in Its New List, 5 Key Traits of the Best B2B Influencers — Brian Solis
  • Lee Odden — 25 Experts Share Their Tips on Winning the Hearts of Your Customers in 2020 — Nimble
  • Nick Nelson — What’s Trending: Organize and Focus — LinkedIn (client)
  • Lee Odden — How to Curate Social Media Content Like a Professional — Business 2 Community
  • TopRank Marketing — 30 Leading Influencer Marketing Agencies to Work With In 2020 — Influencer Marketing Hub

Do you have your own top B2B content marketing or digital advertising stories from the past week? Please let us know in the comments below.

Thanks for making time to join us, and we hope you’ll return next week for a new selection of the most relevant B2B and digital marketing industry news. In the meantime, you can follow us at @toprank on Twitter for even more timely daily news. Also, don’t miss the full video summary on our TopRank Marketing TV YouTube Channel.

The post Digital Marketing News: How B2B Buyers Pick Vendors, BuzzSumo’s New YouTube Insights, ABM Brings B2B Improvements, & How Brands Use Direct Messaging appeared first on Online Marketing Blog – TopRank®.

Author: Lane Ellis

Continue Reading