WR Tweets HIV results – David Clowney has bounced around the NFL, with stops in New York, Carolina and, currently, Buffalo. He’s had a nondescript career, mostly as a deep reserve receiver, hauling in just 22 total receptions.

Well, the 25-year-old finally made headlines — and it had absolutely nothing to do with football. On Tuesday, Clowney tweeted his HIV test results, which he evidently received from his doctors that day, and was unable to contain his excitement.
The world’s anti-doping authorities are launching a focused investigation in Jamaica following allegations that the Jamaica Anti-Doping Commission was not testing its own athletes thoroughly.

According to The Associated Press: “The world’s anti-doping authority is launching an ‘extraordinary’ audit of Jamaica’s drug-testing agency following allegations that its policing of the island’s sprinting superstars led by Usain Bolt all but collapsed in the months before they dazzled at the London Games.”

Since London 2012, high-profile Jamaican sprinters including Asafa Powell, Sherone Simpson and Veronica Campbell-Brown have tested positive for banned substances.

Renee Anne Shirley, former executive director of the Jamaica Anti-Doping Commission, alleges that athletes were not tested outside of competition for years leading up to the 2012 Olympics.

“There was a period of — and forgive me if I don’t have the number of months right — but maybe five to six months during the beginning part of 2012 where there was no effective operation,” Shirley told The Gleaner. “No testing. There might have been one or two, but there was no testing. So we were worried about it, obviously.”

Those allegations didn’t go without pushback. A spokesman for the International Association of Athletics Federations insists that Jamaican athletes are not only tested by Jamaica’s own drug-testing agency, but that the IAAF’s out-of-competition testing of the athletes was “robust and comprehensive,” with tests carried out at Jamaican training camps.

Asked how often he gets tested, Usain Bolt replied, “Sometimes they will come like six times in one month and then you won’t see them for two months and then they come three times in one week. So I don’t really keep track. I just get drug tested when I do.”
Indeed, the Penn State streak may be the longest that is directly comparable to the current Huskies run. There doesn’t appear to be any official list of the longest streaks across all sports, but the generally acknowledged record for a team winning streak belongs to the Trinity College men’s squash team, which won 252 straight team meets from 1998 to 2012. Similarly, the University of Miami won 137 team tennis contests in 1957-64. But a squash or tennis meet winning streak isn’t really the same as a game or match streak in a team sport, since each meet is a collection of smaller matches from an individual or pairs sport.

Individual sports have generated streaks longer than UConn’s. Cael Sanderson of Iowa State won 159 straight matches to go 159-0 in his collegiate wrestling career (other wrestlers have also topped UConn’s 109 in a row). But comparing individual dominance to team dominance is suspect. And other acknowledged streaks can be even more dubious, because they come from lower divisions or carve out a subset of games played — like Mount Union winning 112 straight regular-season games in Division III football.

In other words: Don’t miss Saturday’s game! The Huskies will be playing for a claim to the longest winning streak in collegiate team sports history.

But things get harder from there. To win this tournament, there’s a good chance that UConn will have to beat three of the next four highest-ranked teams. Barring upsets, the Huskies would face Maryland in the Elite Eight, Baylor in the Final Four and either Notre Dame or South Carolina in the final. If the Huskies face the Irish, they’ll be matching up against the team with the second-longest current win streak in NCAA women’s basketball, at 16 games.

Check out our March Madness predictions.
After the University of Connecticut Huskies won their 109th straight game Monday, their chances of winning the NCAA women’s basketball tournament and carrying that win streak into next season remain just about steady at 48 percent (down a tick from 49 percent after the first round), according to FiveThirtyEight’s March Madness predictions.

Up next for the Huskies is No. 4 seed UCLA, a team that was ranked 15th at the end of the season and that UConn hasn’t played since 2014. If UConn wins that game, it’ll have 110 straight wins, moving it clear of Penn State’s 109 consecutive wins in women’s volleyball in 2007-10 — acknowledged as one of the longest winning streaks in collegiate sports history.

Both streaks are more than twice as long as the next-longest streak by any other team in their sport. Before Penn State’s mark, the longest win streak by any other team in Division I women’s volleyball was USC’s 52 games in 2002-04. The longest streak in NCAA women’s basketball by a team other than UConn was 54 games, by Louisiana Tech in 1980-82. (For comparison, UCLA men’s basketball’s heralded 88-game win streak was only 28 games longer than the previous record-holder’s.) Of course, Geno Auriemma’s UConn squad also holds the second- and third-longest streaks in women’s NCAA basketball, as well as the fifth-longest, which ended one game before the current streak began.

VIDEO: How the Villanova and Duke losses shook the bracket
After one of the most astonishing score lines in the history of the World Cup on Tuesday — Germany 7, Brazil 1 — nothing that happens in Sunday’s World Cup final would be a total surprise. But we do have estimates of the most likely final scores for the game.

Germany is a 63 percent favorite to defeat Argentina, according to the FiveThirtyEight forecast. Argentina had a slightly higher Soccer Power Index (SPI) rating when the tournament began, but Germany has seen its rating rise, particularly after its thrashing of Brazil, and it now ranks No. 1 by some margin. Betting lines also have Germany favored.

The SPI match predictor allows us to predict the number of goals scored and allowed by each team. It calls for 1.7 goals by Germany and 1.2 by Argentina.

There are a couple of problems with this — for one thing, a team cannot score seven-tenths of a goal. So the match predictor uses a version of a Poisson distribution, which calculates the probability of the teams finishing with any whole-number score. For example, if Germany scores an average of 1.7 goals, how often does it score exactly two goals, or exactly three? That’s what a Poisson distribution tells us.

Another issue is that the match predictor is calibrated on the basis of 90-minute matches, while knockout-round games can go to extra time. To account for extra-time results, we ran an additional Poisson regression based on the results of extra-time games in major international tournaments since 2005. (In geek speak, we’re nesting a Poisson distribution within another Poisson distribution.) All of that produces the following heat map:

Read left to right for Germany’s score and top to bottom for Argentina’s. Boxes in which the score is still tied after extra time represent cases where the game goes to penalty kicks (there is about a 14 percent chance of this happening).
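As a sketch of the whole-number-score calculation, here is a simplified illustration of the approach (independent Poisson goal counts, no extra-time adjustment) — my own toy version, not FiveThirtyEight's actual model:

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    """Probability of exactly k goals, given an expected-goals rate lam."""
    return exp(-lam) * lam ** k / factorial(k)

# Expected goals from the SPI match predictor, per the article.
GER, ARG = 1.7, 1.2

# Joint probability of each whole-number scoreline, assuming the two
# teams' goal counts are independent (a simplification).
grid = {(g, a): poisson_pmf(GER, g) * poisson_pmf(ARG, a)
        for g in range(8) for a in range(8)}

print(f"P(Germany 2, Argentina 1) = {grid[(2, 1)]:.4f}")
print(f"P(Germany 7, Argentina 1) = {grid[(7, 1)]:.5f}")
```

Even this bare-bones version puts a 7-1 Germany win in the same ballpark as the article's roughly 0.06 percent figure; the heat map in the article is essentially this grid with the extra-time adjustment layered on.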
The 10 most probable scores are as follows:

1. Germany 2, Argentina 1
2. Germany 1, Argentina 0
3. Argentina 2, Germany 1
4. Germany 2, Argentina 0
5. Argentina 1, Germany 0
6. Germany 3, Argentina 1
7. 1-1 draw (game goes to penalties)
8. Germany 3, Argentina 2
9. Germany 3, Argentina 0
10. Argentina 2, Germany 0

What are the odds of another 7-1 scoreline? The model says there is only about a 0.06 percent probability of such a score favoring Germany (about one chance in 1,600). There’s even less of a chance — more than 10,000-to-1 against — of the same score favoring Argentina.

But these figures may underestimate the chance of astonishingly lopsided results. The mathematical basis for the Poisson distribution is the assumption of independent trials. This is a little inexact (the setup below actually describes a binomial distribution, which closely approximates a Poisson distribution when the trials are many and the per-trial probability is small), but a Poisson distribution is treating a soccer game something like this:

Suppose we expect Germany to score 1.7 goals on average in a 90-minute game against Argentina. That translates into about a 2 percent probability (1 chance in 50) of scoring a goal in any given minute of play.

So we can run an experiment where we randomly draw ping-pong balls from a set of 90 lottery machines, one representing each minute of the game. In each machine, there are 50 balls: one labeled GOAL! and 49 blanks. The draw from one machine doesn’t affect what happens with the next one. (This is the assumption of independent trials.) After we’ve drawn balls from all 90 machines, we count the number of GOAL! balls. This total represents how many goals Germany scored in the game.

We can repeat the experiment a bunch of times. Most commonly, we’ll wind up with something like one or two GOAL! balls. But other times we’ll have drawn zero, or four, or six. The relative frequency of these outcomes represents the Poisson distribution for Germany’s score.

As strange as this experiment might seem, it isn’t a bad mathematical approximation of a soccer game.
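The lottery-machine experiment above is easy to simulate directly. This sketch (mine, not the article's) makes one pass over the 90 per-minute machines per "game," repeats that many times, and tallies how often each goal total comes up:

```python
import random
from collections import Counter

def simulate_game(p_goal=1/50, minutes=90):
    """One pass over the 90 machines: count the GOAL! balls drawn."""
    return sum(random.random() < p_goal for _ in range(minutes))

random.seed(0)  # fixed seed so the experiment is repeatable
trials = 100_000
counts = Counter(simulate_game() for _ in range(trials))

# Relative frequency of each goal total -- this is (approximately)
# the Poisson distribution for Germany's score.
for goals in range(6):
    print(f"{goals} goals: {counts[goals] / trials:.3f}")
```

Run it and one or two goals come up most often, with zero not far behind and totals of four or more appearing only a few percent of the time, just as the article describes.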
And for the most part, Poisson distributions do a good job of modeling real-world soccer scores. But there are some complications.

For instance, we may have some estimate of how the absences of Neymar and Thiago Silva might affect Brazil’s chances of scoring against Germany. But there is some uncertainty around that: Maybe Brazil plays more fluidly when it isn’t waiting around for Neymar to do something, or maybe it breaks down. This is equivalent to not knowing exactly how many GOAL! balls and blanks there are in the ping-pong machines. This uncertainty will tend to slightly increase the number of extreme outcomes (Brazil scoring zero goals, or a lot of them) that we observe in the real world.

Another issue is that the texture of play in soccer depends to some extent on the scoreline. Play is usually tighter and more conservative in a drawn game and then opens up once the tie is broken. As a result, standard Poisson distributions slightly underestimate the chance of draws and of some wild scores, such as 5-2. (The variant of the Poisson distribution that we use is meant to address this problem.)

For the most part in sports, these complications are not worth worrying about. There are cases where a Poisson distribution or a normal distribution isn’t perfect — normal distributions seem to slightly underestimate the number of extreme outlier scores in sports — but they usually hold up reasonably well. Nobody gets hurt when you say that Germany has only a 1-in-4,000 chance of winning by six goals when it actually had a 1-in-400 chance.

But real-world distributions are often slightly fat-tailed, meaning that extreme outliers happen more often than the normal distribution predicts. And — outside the sports world — using the wrong model can cause real problems, like underestimating the chance of an earthquake or a financial crisis.
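The fattening effect of rate uncertainty can be demonstrated numerically. In this sketch (the three-point mixture and its weights are my own assumption, purely for illustration), we compare a fixed scoring rate against a model that is merely confident the rate is *around* that value — and the uncertain model assigns more probability to an extreme six-goal outburst:

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    """Probability of exactly k goals at expected-goals rate lam."""
    return exp(-lam) * lam ** k / factorial(k)

# Fixed-rate model: the team scores at exactly 1.7 goals per game.
p_fixed = poisson_pmf(1.7, 6)

# Uncertain-rate model: a crude three-point mixture around 1.7
# (rates and weights assumed for illustration only).
rates = [(1.2, 0.25), (1.7, 0.50), (2.2, 0.25)]
p_mixed = sum(w * poisson_pmf(lam, 6) for lam, w in rates)

print(f"P(6 goals), fixed rate:     {p_fixed:.5f}")
print(f"P(6 goals), uncertain rate: {p_mixed:.5f}")
```

The mixture has the same average rate as the fixed model, yet it makes the six-goal game noticeably more likely — the essence of a fat tail. (A continuous version of this idea, a gamma mixture of Poissons, is the negative binomial distribution.)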
Quick — which NBA player is most integral to his team’s offense? Which player shoulders the biggest offensive burden? And to what degree are those questions even equivalent?

Statistically, such concerns fall under the umbrella of “usage rate,” a term that colloquially describes an entire class of metrics tasked with quantifying the size of a player’s offensive role. Usage is one of the most accessible concepts in basketball analytics — rock-simple in its purview and relatable to anyone who’s ever played with a shameless ball hog or been a terrified freshman playing hot potato. In statsier circles, usage is a staple of player analysis, in part because it remains relatively constant amid a player’s shifting contexts and roles. At a glance, usage says more about how a player plays than most other basic basketball metrics.

One small problem: Nobody seems to agree about what exactly usage rate is, or should be, or how it is calculated. Many analytics-minded observers don’t even know there are different, competing versions of the statistic in popular use, much less that each variant has its own philosophy about what it means to “use” a possession. For a term so common to the modern hoops lexicon, that’s more than a little strange. So let’s have ourselves a little history lesson and learn much more than you ever wanted to know about usage rate, in all its permutations.

Usage through the years

Like many concepts in basketball analytics, usage rate can be traced back to Dean Oliver and John Hollinger, still probably the field’s two most influential figures.
The notion that too much (or too little) offense could flow through an individual player is as old as the game itself, but it’s hard to find anyone formally putting a number on the phenomenon before the early-to-mid-2000s, when Hollinger published his inaugural “Pro Basketball Prospectus” and Oliver wrote the seminal “Basketball on Paper.” In fact, the thought of listing a player’s rate of possession usage at all — let alone as something other than a purely negative indicator — was alien to many of the early hoops number-crunchers.

To understand why, it’s useful to look back at the primordial era of basketball metrics. NBA statheads cribbed many of their early concepts from baseball’s sabermetric movement — which effectively had a 25-year head start — including a tunnel-visioned focus on maximizing efficiency. Such a fixation makes sense in baseball, where a player’s susceptibility to making outs is unambiguously negative — you get 27 of them each game, to be guarded vigilantly — and you can draw a straight line between a player’s individual efficiency and his effect on the team. Hence the reasoning, as applied to basketball: If possessions, like outs, are the sport’s fundamental unit of opportunity, why would we celebrate a player’s propensity for using them up?

Basketball is more complicated than baseball, however. Possessions alternate between teams, so at least one player must always have a hand in “using” each of them. More importantly, teammates do not take turns with their opportunities like hitters going through a batting order: Any individual player is free to use as many (or as few) of the team’s possessions as he wants. This provides a lot of complex ways for an individual to help the team beyond his own personal efficiency statistics.

One of Oliver and Hollinger’s key insights was that the frequency with which a player generates offense — as proxied by usage rate — is a consideration that should always accompany (and temper) his efficiency metrics.
“Some guys … are great shooters and passers, and rarely turn the ball over,” Hollinger wrote, introducing usage in the 2002 edition of his “Prospectus” and predicting the wars he’d fight over Carl Landry half a decade later. “If that’s the case, why don’t people regard them as superstars? The reason is that they cannot create their own shot as often as some other players can.” Usage rate was born out of the effort to quantify said ability to create.

Hollinger’s original conception of usage, which can still be found at ESPN.com today, was a relatively simple pace-adjusted rate of shots, assists and turnovers per 40 minutes. Oliver’s, while rooted in the same basic tenets, went to a far more complex place, accounting for the possession-extending nature of offensive rebounds and even parceling out fractional credit to the scorer and passer on an assisted basket. But at their most elemental, both attempt to individually account for all the actions that can spell an end to any team possession: made baskets, misses that aren’t rebounded by the offense, free throws and turnovers.

Neither Oliver’s nor Hollinger’s interpretation of usage, however, is the preferred version of 2015’s stathead. (At least, not according to this unscientific Twitter poll I conducted Tuesday.) Among the respondents who actually recognized differences between the various flavors of usage, nearly twice as many said they use the Basketball-Reference.com (BBR) version as Hollinger’s. (Oliver’s version isn’t widely available online, except for college players.)

As the stats are used today, there isn’t much separating the three. Mention that a player’s usage rate or usage percentage is in the high 20s to low 30s and you call to mind a ball-dominant focal point of an offense; drop down an octave, into the low-to-mid 20s, and you instead have a player who creates a good deal of offense but doesn’t dribble the leather off the ball.
Whichever version you prefer, usage is in common enough use that it serves as shorthand for offensive hierarchy. In almost every practical application, breaking one version down to its atomic particles and recompiling them into a competing version will be pointless; you already get the idea. Still, it remains worthwhile to understand the differences, such as they are, and how those differences inform what it is you’re looking at. Why? Because BBR’s usage metric doesn’t include assists.

Confusion reigns

Full disclosure: I used to work for Sports-Reference, the company that runs BBR, so I’m close to the situation. And now, a scene from my former life running the company blog, at a time when BBR founder Justin Kubatko and I staged nerd fights about this (and other statistical barnacles):

ME: “Why do we use Hollinger’s definition of usage instead of Dean Oliver’s?”

JUSTIN: “That’s not Hollinger’s. That’s mine.”

ME: “It’s not what he uses at ESPN? I thought it was the same definition.”

JUSTIN: “No. His multiplies assists by a third.”

ME: “I see. But I guess the question still stands.”

JUSTIN: “Mine is basically percentage of team plays used. What the heck is his actually measuring?”

ME: “It’s trying to measure possessions, and failing. But Oliver’s formula gives us real possessions.”

JUSTIN: “They’re not real, either! They’re estimates — better than Hollinger’s, but estimates.”

ME: “I’m confused. This is Hollinger’s fault.”

For most players, this distinction is largely irrelevant; among qualified players1Minimum 400 minutes. this season, the correlation between BBR usage and Oliver’s more full-bodied formula is 0.98. But for certain types of players, it can matter: It’s the difference, for instance, between claiming that DeMarcus Cousins carries the league’s biggest offensive burden (as he does under BBR’s formula) and giving the distinction to Russell Westbrook (No. 1, according to Oliver and Hollinger).
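To make the assists distinction concrete, here is a rough sketch of a Basketball-Reference-style usage rate next to a simplified assist-crediting variant in the spirit of Hollinger's "multiplies assists by a third." The season line is hypothetical, and the second function deliberately omits Hollinger's pace adjustment, so treat both as illustrations rather than the published formulas:

```python
def bbr_usage(fga, fta, tov, mp, tm_fga, tm_fta, tm_tov, tm_mp):
    """BBR-style usage: estimated share of team plays a player ends
    (shots, free-throw trips, turnovers) while he is on the floor.
    0.44 is the standard estimate of the fraction of free-throw
    attempts that end a possession; tm_mp / 5 converts total team
    minutes into game minutes."""
    player_plays = fga + 0.44 * fta + tov
    team_plays = tm_fga + 0.44 * tm_fta + tm_tov
    return 100 * player_plays * (tm_mp / 5) / (mp * team_plays)

def with_assists(fga, fta, tov, ast, mp, tm_fga, tm_fta, tm_tov, tm_mp):
    """Simplified Hollinger-flavored variant: same numerator, plus
    one-third credit for each assist (pace adjustment omitted)."""
    player_plays = fga + 0.44 * fta + tov + ast / 3
    team_plays = tm_fga + 0.44 * tm_fta + tm_tov
    return 100 * player_plays * (tm_mp / 5) / (mp * team_plays)

# Hypothetical season line for a high-assist guard.
args = dict(fga=1200, fta=450, tov=250, mp=2800,
            tm_fga=6700, tm_fta=1900, tm_tov=1100, tm_mp=19800)
print(round(bbr_usage(**args), 1))
print(round(with_assists(ast=700, **args), 1))
```

For this made-up player, crediting a third of his 700 assists lifts his usage by nearly four points, which is exactly the kind of gap that separates the Cousins and Westbrook answers above.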
One measures pure scoring affinity; the others factor in ballhandling responsibility while still strictly accounting for the player(s) who served as the conduit for every possession’s end.

Neither approach is perfect. Playmaking is obviously a massive part of “creating” offense, and cutting it out entirely isn’t ideal. But just stapling assists onto a scoring metric misses huge chunks of what you’re trying to capture. Plus, heavy ballhandlers tend to have higher turnover rates than would be predicted from how often they end possessions, which suggests that even a completist accounting method such as Oliver’s is missing some fundamental aspect of how passers create shots for others.

So with the advent of player-tracking data from SportVU, Seth Partnow of NylonCalculus.com set out to detect the invisible. He developed a statistic called True Usage, which incorporates “assist chances” (so-called “hockey assists,” plus passes that would have been scored as assists if the shot had been made) into the usage mix. The resulting leaderboard is decidedly skewed toward point guards and other primary ballhandlers, like LeBron James. If we’re truly interested in measuring a player’s offensive burden, that probably makes for a more accurate usage framework.

The problem, of course, is that the old-hat usage figures are now entrenched in not only the analytic lexicon but also the updating leaderboards on big industry portals like Basketball-Reference and ESPN. It’s hard to change hearts and minds without first winning over the APIs.

From one stat to many

Then again, maybe the entire concept of a one-number “usage rate” has outlived its usefulness, particularly in an age of hyper-detailed SportVU possession stats. We can now see how long a player holds the ball, how often he passes, how many points those passes create — every conceivable piece of the puzzle is out there, if you know where to look.
And just about every basketball analytics expert I consulted told me that they preferred a modular approach to usage, with different formulas to measure different aspects of a player’s offensive responsibility.

“I don’t use just one usage stat,” Oliver told me. “I do have a shot usage, a field goal usage, and a possession usage stat. Depending on the question being asked, I will look at the one that makes the most sense.”

Jacob Rosen, who writes about analytics for Nylon Calculus and the Cleveland sports blog Waiting For Next Year, concurs that today’s all-in-one usage metrics are inadequate. “Like any type of basketball stat, it’s the balance of wanting to push everything into one metric,” Rosen said. “In some ideal world, you’d have a stat that measures the dimensions of possession time, passes, potential assists, turnovers, shots, free throws, etc. But they’re on somewhat different planes of existence.”

As a possible alternative to a one-size-fits-all usage formula, Rosen wondered whether usage rate’s next step would be to incorporate player typologies, such as the Position-Adjusted Classification (PAC) system developed by current Cleveland Cavaliers director of analytics Jon Nichols. “In my mind, having those different dimensions would be more accurate,” Rosen said. “You could perhaps do a PAC definition just with usage-based things alone (i.e., passing, possession, turnovers, shots).”

Given the state of today’s tools of observation, Partnow’s True Usage may have struck the best balance between the all-encompassing and the customizable, if not the most widely used and understood.

“To me the ideal is True Usage,” Nylon Calculus writer2And FiveThirtyEight contributor. Ian Levy said. “It is as accurate a measure as there is of the quantity of a player’s offensive responsibilities. But the real benefit is that you can parse out the different components to see what comes from playmaking, scoring, turnovers.
That’s the ideal — [a] good holistic measure [that’s] also parsable into components for descriptive uses.”

If so, maybe we should all just turn our attention toward rebranding campaigns for the myriad other versions of usage rate — “Possession Rate”? “Scoring Attempt Frequency”? — or pester the bosses at ESPN or Basketball-Reference for one more column in the Advanced Stats tab. That is, until basketball’s next data revolution comes and brings with it an even more accurate way to measure offensive workload … which we can promptly christen “usage rate” and start all over again.
A homegrown WAR rate of 43 percent is well below the long-term average of 63 percent for world champs, but that number is propped up by teams that won their titles before MLB’s modern era of free agency and mass player movement. Since free agency began in 1976, the average champion has gotten about 50 percent of its WAR from homegrown players. Compared with the highly imported 2004 Red Sox roster, the 2016 Cubs had a pretty normal mix of developed and acquired talent.

Finally, the quality of the 2016 Cubs’ position players set them apart from the 2004 Red Sox, particularly on defense. Both teams received immense contributions from their respective pitching staffs; Boston ranked 14th among champions in pitching WAR,4Per 162 games. while Chicago ranked 27th. But the Cubs’ lineup also generated the 16th-most WAR by a championship team, while the Red Sox got only the 77th-most WAR of any champion from their lineup. Some of Chicago’s impressive young position-player talent flowed from a promise Epstein made at his introductory news conference in 2011. There, Epstein declared his intention to build “a foundation of sustained success” rooted in player development, echoing a similar sentiment from early in his tenure with Boston. “We’re going to turn the Red Sox into a scouting and player development machine,” he said in 2002. Although the returns didn’t come quickly enough for the veteran Red Sox of 2004 — only 12 percent of the team’s WAR was generated by players who began their careers in Boston, the third-lowest rate ever for a champ — Epstein’s machine did eventually produce younger, more homegrown champions in 2007 and 2013. Epstein left Boston in 2011, but his fingerprints were all over the roster that brought Boston its ’13 title. And in 2016, 43 percent of the Cubs’ WAR was generated by players who made their MLB debuts in a Chicago uniform, many of whom Epstein drafted himself.
When Theo Epstein left the Boston Red Sox to become president of baseball operations for the Chicago Cubs in the fall of 2011, he told reporters he was “ready for the next big challenge.” And what a challenge it was: The Cubs were coming off of a 71-win season, without much help on the way. Famously, the team’s last pennant had come 66 years prior, and it hadn’t won a World Series in 103 years.

Epstein, of course, was well acquainted with the anguish of a supposedly cursed fan base. In 2004, as general manager of the Red Sox, he’d been the architect of Boston’s first world championship in 86 years. The parallels to Chicago’s plight were obvious. But the prospect of a second Epstein miracle seemed too much to realistically expect. The 2004 Red Sox had needed one of the greatest comebacks in professional sports history to end the team’s drought — surely such lightning couldn’t strike twice, could it?

It could, and did. On Wednesday night, Epstein’s Cubs did what previously had been reserved for the realm of fantasy, bringing a World Series title to Chicago’s North Side for the first time in 108 years. So, having pulled off the feat twice now, how do Epstein’s two curse-breaking teams stack up?

First things first: The 2016 Cubs were probably better than the 2004 Red Sox. Although the Cubs had a penchant for doing things the hard way during the playoffs, they also had one of the best couple-dozen regular seasons in MLB history. By wins above replacement (WAR),1All mentions of WAR in this story will refer to an average between the competing versions offered at Baseball-Reference.com and FanGraphs.com. Chicago was the seventh-best World Series winner ever; Boston ranked 41st out of the 112 all-time winners. The Cubs also just edged out the Sox according to FiveThirtyEight’s Elo team ratings,2Using the more complete version that’s adjusted for the quality of a team’s starting rotation. ranking 29th among World Series winners versus Boston’s 32nd-place finish.
(To be fair, by another measure of Elo, the 2016 Cubs ranked as the 70th-best team ever, slightly behind the 64th-ranked 2004 Red Sox.)

But more interesting than straight rankings is the contrast in how each team was built. The 2004 Red Sox were a veteran team, the fourth-oldest World Series winner in history.3Using an average for the team’s regular-season roster that weights according to how much each player contributed to the team’s overall record as determined by WAR. They had old hitters — 22nd-oldest among historical champs, as weighted by each player’s regular-season plate appearances — and positively ancient pitchers — No. 1 all time, in fact, weighted by regular-season innings pitched. Epstein was handed a team full of vets when he took over as Boston’s general manager after the 2002 season, and he doubled down further by adding the likes of Curt Schilling (age 37 in 2004), Keith Foulke (31), Kevin Millar (32), Bill Mueller (33) and Mike Timlin (38) via trades or free agency.

Epstein’s Cubs, on the other hand, were pretty average as far as the ages of championship rosters go: They ranked 52nd-youngest out of the World Series’s 112 winners. But they also had an interesting split between the average ages of their lineup and their pitching staff. In keeping with the tradition of the 2004 Red Sox, Epstein once again assembled a fairly old group of pitchers in Chicago — the eighth-oldest among all champs (though a full year and a half younger than Boston’s grizzled staff in ’04). Chicago’s position players, however, ranked 11th-youngest in championship history. The mix of fresh-faced kids such as Kris Bryant (age 24) and Anthony Rizzo (26) on the hitting side and aging pitchers such as Jon Lester (32), Jake Arrieta (30) and John Lackey (37) built the foundation for one of the most interestingly constructed rosters of any champion.
Much of that difference came down to defense: Those Red Sox ranked sixth-to-last in baseball by defensive runs saved in 2004, typifying the classic mashing-over-fielding profile carried by many of that era’s sabermetric darlings. The defensive-minded Cubs, by contrast, illustrated the evolution of today’s data-driven teams, ranking first in baseball (by a wide margin) in DRS this season.

Those kinds of distinctions help put Epstein’s accomplishment in perspective. As one of the first wave of young, Ivy League-educated, statistically savvy general managers, Epstein was able to reverse Boston’s curse by building what was effectively the prototypical early-sabermetric ballclub: patience and power at the plate, and power pitching on the mound. If the ball was ever put in play, you took your chances with the most adequate defense you could cobble together while still propping up your on-base percentage and slugging average. The 2004 Red Sox were one of the first teams to win with that formula, but Epstein’s 2016 champion Cubs show how much the winning equation has changed as sabermetrics has matured. Now, the value of dynamic free-swingers like Javier Baez has been rediscovered, as has the importance of defense. The secret to breaking Chicago’s curse was very different from the one that broke Boston’s hex 12 years earlier.

And if Epstein ever molds another champion elsewhere, it’s a good bet that team will look different from either the ’04 Sox or the ’16 Cubs. Another good bet: It will probably set yet another prototype for subsequent teams to follow, whether they’re trying to end a championship drought or not.
In the team’s final meet leading up to the Big Ten Outdoor Championships, some members of the Ohio State men’s track team competed in the Campbell/Wright Invitational Friday and Saturday at the University of Akron.

Among the athletes competing for OSU, sophomore Demoye Bogle was the lone winner, finishing first in the 400-meter hurdles in a time of 52.06 seconds.

Twin brothers Jeff and Brian Hannaford, OSU athletes who are redshirting their freshman season but competed unattached, took the top two places in the 3,000-meter run with times of 8:37.15 and 8:37.31, respectively.

Freshman Devin Smith, who is also a wide receiver on the football team, competed in his first meet of the outdoor season. He finished sixth in the 100-meter dash with a time of 10.86 seconds and fourth in the high jump with a height of 2 meters.

Two OSU throwers had runner-up finishes. Redshirt senior Tyler Branch finished second in the shot put with a throw of 17.73 meters, while redshirt senior Matt DeChant finished second in the discus with a throw of 52.27 meters.

“This meet was exactly what some of our guys needed to get them ready for next weekend,” interim coach Ed Beathea said in a press release. “I like where we are at and I feel like we are ready for a tough test at the Big Ten Championships at Madison.”

The OSU men’s and women’s track teams are competing in the Big Ten Championships in Madison, Wis., on May 11-13.
A water line break at Bill Davis Stadium during the extreme cold temperatures Jan. 6 and 7 has displaced Ohio State baseball coaches and complicated batting practice for players.

Administration and Planning spokeswoman Lindsay Komlanc said in an email that the Department of Athletics has been working with university contractor BELFOR Property Restoration, which “assists with restoration effects involving water damage, among other things.”

“This work is ongoing, so there is not a cost estimate at this time,” Komlanc said. “Our crews first response is always to immediately isolate and shut off the water and the next priority is repairing the space so it can return to normal use as quickly as possible.”

The leak occurred in the ceiling of the second floor of the baseball facilities, flooding the baseball coaches’ office, Komlanc said.

OSU athletics spokesman Brett Rybak said in an email that the second floor holds the offices of OSU coach Greg Beals and two assistants, an office for a volunteer assistant and the director of operations, and a front desk for a receptionist.

“The coaches have been working out of our video room behind our home dugout the last two weeks,” Rybak said.

Redshirt-freshman pitcher Joe Stoll said the water from the offices leaked through the ceiling and into the players’ batting cages on the first floor.

“The whole side of the building was covered in ice,” Stoll said. “Every single paper in their office was unusable.”

Rybak said there is no estimated date for the offices to be reopened.

Stoll said dehumidifiers have been set up along the players’ batting cages to help dry up the water.

“We should have the dehumidifiers in the batting cages until our first trip on Feb. 14,” Stoll said.

Komlanc said repairs mainly involve replacement of drywall, wood trim and carpeting.

Multiple calls to BELFOR Property Restoration were not returned. Attempts to obtain photographs of the scene were denied.

The OSU baseball team is scheduled to start its season Feb. 14 against Connecticut in Port Charlotte, Fla., as part of the Snowbird Classic.
Co-offensive coordinator and quarterbacks coach Ryan Day speaks to the media on March 21. Credit: Jacob Myers | Assistant Sports Editor

Ohio State lost long-time college coaches Ed Warinner and Luke Fickell after the 2016-17 season. While some fans cheered the departure of Warinner and wished Fickell well in Cincinnati, the pedigree of OSU coaches got a whole lot more impressive with the addition of Ryan Day and Bill Davis.

While Day will help guide redshirt senior J.T. Barrett through his last year in Columbus, Davis will be tasked with leading the linebacker unit, arguably the pride and joy of the last few Buckeye football teams.

Also taking on co-offensive coordinator duties alongside newly hired Kevin Wilson, Day most likely will be the coach most closely observed by fans after the OSU passing game struggled for the second straight season. Still, his time spent as the quarterbacks coach under Chip Kelly with both the Philadelphia Eagles and the San Francisco 49ers last season should help.

Just three days into practice, Day sees a close similarity between OSU and NFL programs.

"Real close," he said Tuesday. "First off, because the guys who are running around this field are like NFL players. From the skill guys to the guys up front, the guys here have done an unbelievable job recruiting. So, the talent level here is just like a lot of the NFL teams. And that's what's most impressive when you get out here for the first three days."

While Day might feel at home, he still faces a tough task in morphing Barrett back into the passer he was during his redshirt freshman season, when he threw for 34 touchdowns and completed 64.7 percent of his passes.
Day sees the potential in Barrett to return to the form that pushed him into Heisman consideration, and is using his NFL experience to help the Scarlet and Gray signal-caller.

"I think that I was lucky enough to coach those guys for the last couple years in the NFL and focus on quarterback play and fundamentals," Day said. "I really impart that to him every day and just kind of relaying some of that information to him. I think he really appreciates that. But he's also been coached at a high level to this point too, so it's just really building upon it at this point."

Day was an offensive coordinator in 2013 and 2014 at Boston College. During those seasons, the Eagles averaged 27.7 and 26.2 points per game, respectively.

Linebackers coach Bill Davis speaks to the media on March 21. Credit: Jacob Myers | Assistant Sports Editor

Davis brings in more "next level" coaching experience than Day, most notably as a coach under NFL defensive masterminds Bill Cowher, Dick LeBeau, Wade Phillips, Marvin Lewis and Dom Capers. Entering his 26th season of coaching, Davis has belonged to the coaching staffs of nine different NFL teams.

Players have been feeling the difference in the way he coaches, comparing it to an authentic NFL style.

"Definitely. The first day, the first meeting, you could tell," junior linebacker Jerome Baker said. "This has to be a NFL meeting room because, the way he coach(es), his style especially, is geared toward pro athletes. You could tell that he'd been in the NFL for a few years."

Davis has coached numerous notable NFL linebackers, such as D'Qwell Jackson, Connor Barwin and Kevin Greene. Like Day, Davis sees similarities between the OSU program and the NFL.

"As much as it can be," Davis said. "The difference is the classes that the young men have to go to, we don't have in the NFL. So the structure of the work is a little bit different. But what separates the Ohio State guys is the total growing of the man. I really am in awe of how coach Meyer and his staff and the system grows a human being, not just a football player. So what we found in the NFL is when Ohio State guys come, their mental toughness, because they go through this system of the grind, of hard, they come in so mentally tough, it's tough to trip up Ohio State guys. So that's why you see the young guys succeeding in the NFL. Because the talent level is the same."

There is still plenty of time for things to change, but the overall impression the new coaches have made on the staff has been positive. The impact of Day and Davis will be on display April 15 in Ohio Stadium during the spring game, with kickoff scheduled for 12:30 p.m.