

Jeremy Gollehon's Public Library

  • It doesn’t necessarily have to be the VP of Sales (especially in companies with more than 30 people); a successful senior sales rep would suffice. And to be clear: The product manager still owns the product, and has to own both short- and long-term thinking about that product. But if you’re growing a company, you have to sell something, now. Having salespeople in the room helps bring that urgency. The union of product management and sales, done right, produces so much more than either ever did alone.

  • Competitors. You are not alone. No one in enterprise IT believes you built the one and only product that does most of what you do. Coming to an enterprise sales engagement with a detailed understanding of competitors shows respect and acknowledgement of reality. There are two types of competitors you need to understand fully. First, you need to be versed in the current marketplace competitors and how you compare to them. Often the best tool to view this is a classic “magic quadrant”—just be forewarned you have to substantiate claims carefully and be prepared for the “fans” of competitors to confront you (and be prepared for your competitors to sell against your characterization). If you’re doing this right, you are not creating new comparison criteria but using incumbent/competitor criteria as a starting point. Second, you need to be versed in how the enterprise is already addressing (or trying to address) the problem space. This is just as much a competitor—in enterprise software the easiest product to buy is the one you’ve already got in place and no one gets fired for doing that. While you might be negative towards your market competitors, it is incredibly important to be respectful of implemented competitors or homegrown solutions even if some in IT might mock their own choices.
  • Replacement. The very last step in the partnership with an enterprise customer is replacing an existing system. I purposely put this last because almost every product person thinks that when you have a new product, the first line of sales is to explain what the customer can replace or decommission if they buy it. Every IT person knows that this is exactly the last thing you do, and that usage of any implemented system has a long tail before actual replacement, no matter how inevitable that replacement is. This is important to internalize in terms of building a partnership, because every running system has a champion or advocate who bought and deployed it, so a poor selling technique is to challenge that person too early. If you play everything correctly, someday you will be the system that keeps running long after it should; that’s something to keep in mind!

  • A natural person who exercises investment discretion over his or her own account is not an institutional investment manager.
  • If it is determined that the registered investment advisor meets the reporting requirements of this section, it must file Schedule 13D within 10 days of becoming a 5% beneficial owner. Schedule 13D must then be updated promptly when changes occur.
  • annual basis. Schedule 13G must be filed when a qualified institutional investor exceeds 5% of a class of outstanding registered equity securities, provided they hold the securities in their normal course of business and not to effect change or influence control of the issuer. Schedule 13G is actually combined with Schedule 13D.
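The 5% threshold in the filing rules above can be sketched as a simple calculation. This is a hypothetical illustration only; the function names and share counts are made up, and the real determination involves beneficial-ownership rules far beyond a percentage check.

```python
def ownership_pct(shares_held: int, shares_outstanding: int) -> float:
    """Percentage of a class of outstanding equity securities held."""
    return 100.0 * shares_held / shares_outstanding

def crosses_threshold(shares_held: int, shares_outstanding: int,
                      threshold: float = 5.0) -> bool:
    """True if the holder's stake exceeds the beneficial-ownership threshold."""
    return ownership_pct(shares_held, shares_outstanding) > threshold

print(crosses_threshold(5_200_000, 100_000_000))  # True: 5.2% > 5%
print(crosses_threshold(4_900_000, 100_000_000))  # False: 4.9%
```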

  • Rather than going the rule-of-thumb route, I would urge you to consider getting closer to a single-stream flow on individual things that the software needs to do and employ the “Three Amigos” model that George Dinwiddie explains in Better Software, November/December 2011.  Here, we get the BAs, QAs, and developers together at the start to write automated tests that serve as the functional requirements for the work to be done.  If we keep the rate of production of these collaboratively developed tests in line with the actual rate of production that satisfies the tests, we never have to fear that something is off balance.  If we find, perhaps with a Kanban analysis, that we can’t produce enough tests to keep our developers supplied with work, we may find that we don’t have enough BA types or QA types available, and can adjust accordingly.


    And, yes, there will always be a place for QA exploratory testing on integrated code.  But what that exploration finds should also be captured in regression suites that are automated and repeatable.
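A collaboratively written test that doubles as a functional requirement, as described above, might look like the following sketch. The discount rule and the `order_total` function are hypothetical examples, not from any real system; the point is that the assertions are agreed on by BA, QA, and developer before implementation begins.

```python
def order_total(subtotal: float, loyalty_member: bool) -> float:
    """Apply a 10% loyalty discount to orders of $100 or more."""
    if loyalty_member and subtotal >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# The requirement, expressed as executable assertions:
assert order_total(120.00, loyalty_member=True) == 108.00   # discount applies
assert order_total(120.00, loyalty_member=False) == 120.00  # non-members pay full price
assert order_total(99.99, loyalty_member=True) == 99.99     # below the $100 threshold
```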

  • If we are prepared to update the software instantly, the potential consequences of a bug sliding through testing won't be as big.

  • When people feel like they have no sense of direction, no purpose in their life, it’s because they don’t know what’s important to them, they don’t know what their values are.

  • What Is Master Data Management?


    For purposes of this article, we define Master Data Management (MDM) as the technology, tools, and processes required to create and maintain consistent and accurate lists of master data. There are a couple things worth noting in this definition. One is that MDM is not just a technological problem. In many cases, fundamental changes to business process will be required to maintain clean master data, and some of the most difficult MDM issues are more political than technical. The second thing to note is that MDM includes both creating and maintaining master data. Investing a lot of time, money, and effort in creating a clean, consistent set of master data is a wasted effort unless the solution includes tools and processes to keep the master data clean and consistent as it is updated and expanded.

    • How Do I Create a Master List?


      Whether you buy a tool or decide to roll your own, there are two basic steps to creating master data: clean and standardize the data, and match data from all the sources to consolidate duplicates. Before you can start cleaning and normalizing your data, you must understand the data model for the master data. As part of the modeling process, the contents of each attribute were defined, and a mapping was defined from each source system to the master-data model. This information is used to define the transformations necessary to clean your source data.


      Cleaning the data and transforming it into the master data model is very similar to the Extract, Transform, and Load (ETL) processes used to populate a data warehouse. If you already have ETL tools and transformations defined, it might be easier just to modify these as required for the master data, instead of learning a new tool. Here are some typical data-cleansing functions:  

      • Normalize data formats. Make all the phone numbers look the same, transform addresses (and so on) to a common format.
      • Replace missing values. Insert defaults, look up ZIP codes from the address, look up the Dun & Bradstreet number.
      • Standardize values. Convert all measurements to metric, convert prices to a common currency, change part numbers to an industry standard.
      • Map attributes. Parse the first name and last name out of a contact-name field, move Part# and partno to the PartNumber field.
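A couple of the cleansing functions listed above can be sketched in a few lines. The field names and the US phone-number format are illustrative assumptions, not taken from any real MDM tool; production cleansing would handle far more edge cases.

```python
import re

def normalize_phone(raw: str) -> str:
    """Reduce a 10-digit US phone number to a canonical XXX-XXX-XXXX form."""
    digits = re.sub(r"\D", "", raw)[-10:]  # strip punctuation, keep last 10 digits
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

def split_contact_name(contact: str) -> dict:
    """Parse first and last name out of a single contact-name field."""
    parts = contact.strip().split()
    return {"first_name": parts[0], "last_name": parts[-1]}

print(normalize_phone("(425) 555-0199"))    # 425-555-0199
print(split_contact_name("Ada Lovelace"))   # {'first_name': 'Ada', 'last_name': 'Lovelace'}
```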

      Most tools will cleanse the data that they can, and put the rest into an error table for hand processing. Depending on how the matching tool works, the cleansed data will be put into a master table or a series of staging tables. As each source is cleansed, the output should be examined to ensure the cleansing process is working correctly.


      Matching master-data records to eliminate duplicates is both the hardest and most important step in creating master data.
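A toy illustration of that matching step, assuming a hypothetical record shape with just a `name` field: score candidate pairs by string similarity and flag likely duplicates above a threshold. Real MDM matching weighs many attributes (address, tax ID, phone) and applies survivorship rules; this sketch only shows the basic idea.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_probable_duplicate(rec_a: dict, rec_b: dict, threshold: float = 0.6) -> bool:
    """Flag a pair for consolidation (or steward review) above the threshold."""
    return similarity(rec_a["name"], rec_b["name"]) >= threshold

a = {"name": "Acme Corporation"}
b = {"name": "ACME Corp."}
print(is_probable_duplicate(a, b))  # True: the names match closely enough
```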

    • How Do I Maintain a Master List?


      There are many different tools and techniques for managing and using master data. We will cover three of the more common scenarios here:  

      • Single-copy approach—In this approach, there is only one master copy of the master data. All additions and changes are made directly to the master data. All applications that use master data are rewritten to use the new data instead of their current data. This approach guarantees consistency of the master data, but in most cases it's not practical. Modifying all your applications to use a new data source with a different schema and different data is, at least, very expensive; if some of your applications are purchased, it might even be impossible.
      • Multiple copies, single maintenance—In this approach, master data is added or changed in the single master copy of the data, but changes are sent out to the source systems in which copies are stored locally. Each application can update the parts of the data that are not part of the master data, but they cannot change or add master data. For example, the inventory system might be able to change quantities and locations of parts, but new parts cannot be added, and the attributes that are included in the product master cannot be changed. This reduces the number of application changes that will be required, but the applications will minimally have to disable functions that add or update master data. Users will have to learn new applications to add or modify master data, and some of the things they normally do will not work anymore.
      • Continuous merge—In this approach, applications are allowed to change their copy of the master data. Changes made to the source data are sent to the master, where they are merged into the master list. The changes to the master are then sent to the source systems and applied to the local copies. This approach requires few changes to the source systems; if necessary, the change propagation can be handled in the database, so no application code is changed. On the surface, this seems like the ideal solution. Application changes are minimized, and no retraining is required. Everybody keeps doing what they are doing, but with higher-quality, more complete data. This approach does have several issues: 
        • Update conflicts are possible and difficult to reconcile. What happens if two of the source systems change a customer's address to different values? There's no way for the MDM system to decide which one to keep, so intervention by the data steward is required; in the meantime, the customer has two different addresses. This must be addressed by creating data-governance rules and standard operating procedures, to ensure that update conflicts are reduced or eliminated.
        • Additions must be remerged. When a customer is added, there is a chance that another system has already added the customer. To deal with this situation, all data additions must go through the matching process again to prevent new duplicates in the master.
        • Maintaining consistent values is more difficult. If the weight of a product is converted from pounds to kilograms and then back to pounds, rounding can change the original weight. This can be disconcerting to a user who enters a value and then sees it change a few seconds later.

      In general, all these things can be planned for and dealt with, making the user's life a little easier, at the expense of a more complicated infrastructure to maintain and more work for the data stewards. This might be an acceptable trade-off, but it's one that should be made consciously.
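The update-conflict issue in the continuous-merge approach can be sketched as follows. Everything here is hypothetical (the record keys, the change format, the first-write-wins placeholder policy); the point is that when two source systems disagree on a master attribute, the hub cannot pick a winner automatically and must queue the conflict for a data steward.

```python
def merge_change(master: dict, pending: dict, change: dict) -> None:
    """Apply a source-system change to the master, or queue it on conflict."""
    key, field = change["key"], change["field"]
    record = master.setdefault(key, {})
    if field in record and record[field] != change["value"]:
        # Conflicting value from another source: hold for steward review.
        pending.setdefault((key, field), []).append(change)
    else:
        record[field] = change["value"]

master, pending = {}, {}
merge_change(master, pending, {"key": "CUST-1", "field": "address",
                               "value": "12 Oak St", "source": "CRM"})
merge_change(master, pending, {"key": "CUST-1", "field": "address",
                               "value": "99 Elm Ave", "source": "Billing"})
print(master["CUST-1"]["address"])  # 12 Oak St (first write wins until reconciled)
print(len(pending))                 # 1 conflict awaiting the data steward
```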


  • I took Modafinil for a month and did not respond with any of the desired effects; the only effect that occurred was loss of appetite (which was nice).


    I was then prescribed Armodafinil (the (R)-enantiomer-only version of Modafinil) and responded very well. I need less sleep, I feel motivated, and my concentration span has increased greatly.


    I would definitely recommend trying Armodafinil if someone does not respond well to Modafinil.

  • Hi Lkutter,


    Thank you so much for sharing your story with us, and for being part of this community.


    While you may get more feedback from the community, I thought you might appreciate this article, as you are no longer able to see your neurologist. It’s about the different types of doctors and specialists that work with migraine. I truly hope this proves useful for you.


    Additionally, to answer your question, it’s possible that one type of scan would pick up on something that the other one missed, because they work in different ways. Here is an article about CT scans and what they can detect, as well as an article on what MRIs can detect. I hope this helps!


    Warm Regards,


    Jenn (Community Manager,

  • thanks all. I KDZ'd 12b stock rom, rooted, then flashed TWRP through flashify. I then went into recovery and installed the 23c rom. I used skydragon 4.0.0 because I tried to flash Jasmine 7.0 like 8 times. I did not have any of the bottom keys and I could not access my pulldown menu. I could not figure out why this was happening. Skydragon worked for me perfectly. Thanks All.
