afhill
A F Hill
9 posts
social media Strategy fix.
afhill · 3 months ago
Text
Managing and manipulating factors in R
Factors are a unique and essential data structure in R, designed to handle categorical data. They are particularly useful when working with qualitative variables, such as gender, color, or education level, where data points belong to a finite set of categories or levels.
A factor in R is a data structure used to represent categorical data. Factors store both the values of the categorical variable and the corresponding levels (i.e., the unique categories). Unlike character vectors, factors are treated as categorical data, which allows R to handle them more efficiently in statistical models.
Why Use Factors?
Efficient Memory Usage: Factors are stored as integers under the hood, with each level assigned a unique integer value. This can be more memory-efficient than using character vectors for large datasets.
Order and Levels: Factors allow you to explicitly define and manage the order of levels, which is useful for ordered categorical data (e.g., "low," "medium," "high").
Compatibility with Models: Many statistical functions in R, such as linear models, treat factors differently from numeric or character data, often as categorical predictors.
Creating Factors
You can create a factor from a vector of categorical data using the factor() function.

# Creating a factor from a character vector
gender <- c("male", "female", "female", "male", "female")
gender_factor <- factor(gender)

# Display the factor
print(gender_factor)
This will output:

[1] male   female female male   female
Levels: female male
In this example, the gender_factor has two levels: "female" and "male."
Specifying Levels:
You can specify the order of levels explicitly using the levels argument in the factor() function.

# Specifying levels and order
education <- c("highschool", "bachelor", "master", "phd", "bachelor")
education_factor <- factor(education, levels = c("highschool", "bachelor", "master", "phd"))

# Display the factor
print(education_factor)
This will output:

[1] highschool bachelor   master     phd        bachelor
Levels: highschool bachelor master phd
Manipulating Factors
Changing Levels:
You can rename or reorder the levels of a factor using the levels() function.

# Renaming levels
levels(gender_factor) <- c("Female", "Male")

# Display the renamed factor
print(gender_factor)
Adding or Dropping Levels:
You can add or drop levels from a factor using the factor() function and the levels argument.

# Adding a new level
education_factor <- factor(education_factor, levels = c("highschool", "bachelor", "master", "phd", "postdoc"))

# Dropping unused levels
education_factor <- droplevels(education_factor)
Reordering Levels:
The relevel() function allows you to reorder the levels of a factor, which is particularly useful when the first level has a special significance (e.g., setting a reference category in statistical models).

# Reordering levels, making "phd" the first level
education_factor <- relevel(education_factor, ref = "phd")
Using Factors in Data Analysis
Factors are particularly useful in data analysis, especially when dealing with categorical variables in statistical models.
Summary Statistics:
You can quickly obtain summary statistics of factors using the summary() function.

# Summary of the gender factor
summary(gender_factor)
This will show the count of each level:

Female   Male
     3      2
Using Factors in Models:
When using factors in statistical models, R automatically treats them as categorical variables. For example, in a linear model:

# Linear model using a factor
model <- lm(salary ~ education_factor + gender_factor, data = dataset)
R will treat education_factor and gender_factor as categorical predictors, and will handle them appropriately in the analysis.
Best Practices for Managing Factors
Explicitly Define Levels: Always define levels explicitly, especially when the order matters (e.g., ordinal variables).
Use Factors for Categorical Data: Ensure that you use factors for categorical data rather than character vectors to take advantage of R's built-in handling of factors in statistical analysis.
Check Factor Levels: Regularly check and update factor levels as needed, especially after subsetting or combining datasets, to avoid issues with unused or unexpected levels.
Read the full piece at Strategic leap
afhill · 6 years ago
Text
Social media Situational Analysis
The situational analysis in a social media strategy template helps you coordinate all the data you have gathered so far, building an overall picture of what is happening for a brand, organization, or person on social media. This section allows the social media strategist to combine the findings and takeaways discovered during the external scanning, background research, and communication audit stages. We combine all of the information, data, and insights we have gathered into organized categories of knowledge. This thorough approach plays a significant role in determining the steps the strategic plan needs to take. Research is only as good as your ability to discover what is happening, why it is happening, and how you can apply your findings to construct sound strategies. Organizing your research efforts helps you see the whole picture at once.
Central Problem or Opportunity.
Almost all strategic plans outline one main problem that needs to be addressed in the campaign. In the case of social media, this single problem might be determining how social media could help address an ongoing issue for an organization. Challenges might take the form of reestablishing trust within a particular group after a crisis (e.g., Uber and #DeleteUber, or PewDiePie and Disney's breakup) or tying it to a larger case (e.g., Chipotle and the E. coli food scare). Often neglected is the need for social media plans to address new opportunities rather than only to solve existing problems. A social media strategic plan could include ways to use social media to take advantage of positive relationships with a community or to increase sales and exposure.
SWOT
Four main categories are involved in a SWOT analysis: strengths, weaknesses, opportunities, and threats. SWOT analyses are a traditional part of communication and marketing campaign practice, but they are essential to the social media strategic plan as well. A SWOT analysis can be used to investigate and identify solutions to problems, take advantage of new opportunities and ventures, decide which steps to take to rejuvenate a community or brand online, or consider new ways of engaging online through social media. The four primary aspects of SWOT are necessary, but we must also form links between these components in a fifth, strategic implications component of the plan.
Strengths:
This is where you outline in detail the strengths observed in current social media practices within an organization or for a person. For example, strengths will often take the form of an established presence on certain platforms, systematic training for employees so they stay on top of the latest trends, and a collaborative environment in which social media is not only used but embraced.
Strengths can be divided into categories to keep them organized. For example, you could group internal resources as human, financial, or creative. Evaluate the overall standing of the organization's culture. A positive corporate or agency culture can lead to an atmosphere in which employees feel comfortable sharing ideas or taking leadership roles. Culture is a strength in some cases, but it may be categorized as a weakness if it holds the organization back.
Each strength must be supported by the data you have collected, which comes from your social media communication audit. Construct a rationale, or a summary of each strength relative to the organization's overall mission.
Weaknesses:
Weaknesses hinder or challenge the organization's ability to accomplish its objectives, and they are often the opposite of the items mentioned in the previous section on strengths. For example, one challenge could be a leadership environment in which social media is not valued. In such a situation, employees would find it very difficult to be creative in their use of social media. Another weakness could be a lack of mentorship and educational training that would allow employees to keep up with the changes in social media. As is the case with strengths, you have to provide evidence to support your points regarding weaknesses. The communication audit might be adequate for these purposes, but it may also be helpful to use focus groups or your own observations of the organization's processes. Analytics can also support your evaluation.
Opportunities:
An opportunity is a set of ideas or circumstances external to the client that can lead to new approaches and actions. This part of the analysis should include a list of creative ideas to promote ideation and to jump-start new initiatives. These ideas can take various forms. Examples include evaluating new social media campaigns and trends, experimenting with new systems or tools, reaching new audiences that have not yet been explored, and looking for new communities with which to engage. Opportunities combine the strategic insights gathered during research with the creative execution of content and stories. These insights, like those in the previous sections, should be supported by data, research, and findings collected during the social media communication audit and background research processes.
Threats:
Threats are another classic external factor, arising from negative events affecting people or organizations. This portion of your analysis should identify ongoing political, regulatory, environmental, and technology-driven threats. Competitors can be listed in this category if they threaten your well-being or take away elements that you have built on social media. For example, Facebook has copied several of Snapchat's features and incorporated them into Instagram Stories. If you were doing a threat analysis for Snapchat, this issue would definitely be included.
Strategic Implications:
The strategic implications section is a fifth component of the SWOT analysis that looks at the "so what" factor: why the information in the SWOT analysis is important to consider, and which driving factors should be taken into account as the client moves forward. This section should be no more than one or two sentences long, and should precisely synthesize the information collected throughout the background research and social media communication audit into a bold, clear statement summarizing the findings, what can be done about them, and why they matter. This step is not always present in other communication- or marketing-related disciplines, and including it can differentiate you in a positive way from many other aspiring social media professionals.
afhill · 6 years ago
Text
Knowledge and Technology Alignment
Since its establishment, the company's main goal has been to stay competitive in its industry. With ever more demanding clients, the company is always searching for the best tools that allow for process optimization and for building lasting relationships with customers. As technologies keep evolving, they require investment not only in facilities but, most importantly, in know-how, emphasizing the need for employees to know the details of the business operations.
The story of knowledge and technology alignment dates back to the company's early days. Back then, customers' acceptance of online banking for product purchase and payment was low, and online channels of communication were limited. However, the company had, and still has, a strong belief in how information technology could be used as an advantage. Therefore, the company ensures that every employee is tech-savvy.
A specific example concerns the payment system. Understanding that users have different preferences, the company adopted a mixed approach to payment and bill settlement, and the system changes to align with new trends. The company had experience using iPay88, an online payment service for Southeast Asian countries, for credit card payments, but not PayPal, as acceptance of the latter was very low. When acceptance of online and mobile banking became high, the company created another option for customers. For customers from neighbouring countries, the company provides a payment alternative with Money Express. As a result, technology know-how enables a sustainable business.
Ensuring Tight Security and Confidentiality of Customer Data
In this business, customers' information is very important, and data security and confidentiality must not be compromised. When a customer makes an order, the information is digitally recorded and protected behind an authorized personal login. Similarly, for online purchases, a customer needs to create a personalized account, and the shopping cart is visible only to authorized staff. To ensure everything is secured, the company is highly selective in choosing trusted employees to manage the system, and several layers of database security are implemented. Data synchronization is also a top priority. When an online customer would like to check her order physically or to make another purchase at the showroom, it can be done without any hassle, and the new information is updated; when an offline customer would like to shop online, that is handled seamlessly too. The ability to provide a highly secure system for omnichannel transactions enables the company to build customer confidence and loyalty, which becomes a factor in business sustainability.
Issues and Challenges
Although the company is able to synchronize across channels, two main issues challenge business growth. The first is facilities and infrastructure support from the government and the telecommunications provider. Cross-channel integration requires stable Internet and data connections, so the government and telecommunications providers must ensure connectivity is supported at all times. The second is the issue of intellectual property (IP). As there are many card designs in various sizes, colors, themes, and purposes, it is very challenging to protect the designs: to obtain IP protection, registration would have to be done for each individual card and product. The company is therefore highly exposed to the danger of design imitation by competitors.
Technology and innovation have changed many aspects of human lives. Achieving omnichannel success requires a clear understanding of its purpose. Providing services to customers requires attention to the details of both online and offline conduct. The Internet, mobile devices, and social media have revolutionized the customer experience by allowing customers to shop from anywhere at any time. As such, companies are challenged to look for strategies that allow them to integrate the different channels seamlessly.
In an omnichannel environment, customer engagement is crucial. Similarly, it is very important to create a trusted environment for customer loyalty. More importantly, it requires a harmonious integration of people, technology, and organization for business sustainability.
In this post, we present a case that highlights the journey of a card design and printing company in its omnichannel adoption. We show how the omnichannel strategy is implemented by focusing on the information delivery and product fulfilment requirements. Our study offers insights on these issues. The findings suggest that, in the omnichannel environment, companies need to offer services in different ways according to each distribution channel. Across channels, however, people, technology, and organization must blend together for seamless integration.
These practices give the company the ability to align different channels and adapt to change while meeting the expectations of demanding customers. Good customer relationship practice creates chain effects, becoming a source of word of mouth to customers' families, circles of friends, and beyond. In particular, 24/7 online communication, being patient with customers, ensuring good manners, and the ability to handle different customers in different channels yield a fruitful outcome for the company.
The theoretical contribution of this case highlights the importance of linking information delivery and product fulfilment requirements in each channel. Our framework suggests that customer management, knowledge and technology alignment, customer trust, and data security and confidentiality all enhance the omnichannel experience. The framework serves as a basis for exploring more options for closer business-customer relationships via various technologies. Additionally, it could assist other businesses in understanding the key issues in their plans to implement an omnichannel strategy.
However, although the framework was developed from an in-depth case study investigation, we did not assess customers' views of the omnichannel experience, which offers considerable potential for future research. Building on a qualitative framework, further studies could provide measurements for each concept within the framework and validate them through quantitative studies. By exploring the issue through both qualitative and quantitative approaches, a more holistic understanding of an effective omnichannel implementation strategy could be achieved.
afhill · 6 years ago
Text
What Is Bitcoin Mining?
Bitcoin mining refers to the processing of transactions: bitcoin transactions are verified so that they can be added to the blockchain. The processing creates a new block that is linked back to the most recent block in the blockchain. Blocks must be validated by a proof-of-work; Bitcoin uses Hashcash-style proof-of-work for this purpose. Upon finding a valid block, the miner broadcasts it to the network, where it is verified by other miners for consensus.
Participants who perform bitcoin mining are called miners, and they receive an incentive for doing so. There are two parts to mining: verifying and compiling recent transactions, and solving a computationally hard puzzle. The miner who solves the puzzle first receives the incentive in the form of bitcoin, and their block is added to the blockchain. The miner's incentive has two parts: the transaction fees of the transactions contained in the block, and newly released bitcoin. The newly released bitcoin that comes with a successfully solved block is called the block reward.
Considering the total bitcoin supply of 21 million, a number set by bitcoin creator Satoshi Nakamoto, the block reward is halved every 210,000 blocks on a fixed schedule. The block reward was 50 bitcoins in 2009; as of 2018, it has decreased to 12.5 and will continue to decrease as more blocks are mined. At the bitcoin block time of 10 minutes per block, it takes about four years to mine 210,000 blocks, so one may expect the block reward to halve roughly every four years. Even at 12.5 bitcoin per block today, it is still a substantial amount. The block reward will become zero once all the bitcoins are mined; by then, the reward will have halved about 33 times from the creation of bitcoin until the end of new issuance. Over time, mining is getting more and more difficult, but with the increase in bitcoin's value, the bitcoins earned today, even though fewer in count, have much higher purchasing power than they used to. A bitcoin block is 1 MB in size.
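The halving schedule described above can be sketched in a few lines of Python. This is only an illustration of the arithmetic (50 BTC initial reward, halving every 210,000 blocks), not actual consensus code, and the function names are my own.

```python
HALVING_INTERVAL = 210_000   # blocks between reward halvings
INITIAL_REWARD = 50.0        # block reward in BTC at launch (2009)

def block_reward(height):
    """Block reward (in BTC) at a given block height."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_REWARD / (2 ** halvings)

def approximate_total_supply():
    """Sum rewards across halving eras to approximate the 21M cap."""
    total, reward = 0.0, INITIAL_REWARD
    while reward >= 1e-8:        # below one satoshi, issuance effectively stops
        total += reward * HALVING_INTERVAL
        reward /= 2
    return total

print(block_reward(0))             # 50.0, the 2009 reward
print(block_reward(500_000))       # 12.5, the reward around 2018
print(approximate_total_supply())  # just under 21,000,000
```

The geometric series converges to just under 21 million, which is why the supply cap and the halving schedule are two views of the same design choice.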
How to Do It?
One does not need any license to do bitcoin mining. In the beginning, that is, in 2009, miners used their laptops, but those days are gone: hardware costs are too high today to reap a reasonable benefit that way.
There are two methods of bitcoin mining: mine independently or join a mining pool. Either way, the first step is to have a bitcoin wallet. As discussed in the earlier chapter, a hardware or paper wallet is the most secure option, compared with online or cloud-based wallets.
If one has chosen independent mining, it is very difficult to do on a personal laptop. Even if one gains some bitcoins, the benefits are much lower than the electricity expenses, not to mention the wear and tear, and hence decreased life, of the hardware of a personal laptop or desktop. If, despite these setbacks, one is determined to go ahead independently, one has to look for an application-specific integrated circuit (ASIC) miner. An ASIC miner is selected based on factors such as hashing power, efficiency, and price. It is recommended to buy an ASIC miner first-hand; a second-hand ASIC miner carries a high probability of burning out faster, that is, of not lasting long enough to turn a profit. Standard laptops for home use are not recommended for mining because of the high electricity use and the risk of burning out the hardware.
Hashing power, or hash rate, is the unit of processing power of the bitcoin network: the power a computer or piece of hardware uses to run and solve hashing algorithms. The bitcoin network makes heavy use of cryptographic operations. A hash rate of 10 TH/s means 10 trillion calculations per second. The more hashing power, the more expensive the hardware.
Efficiency is another important factor, as electricity costs may outweigh the benefits of bitcoin mining. A usable miner costs thousands of dollars, electricity is an additional expense to run the hardware, and the power supply adds to it. There are online calculators for bitcoin mining profit, where one inputs hash rate, bitcoin price, power consumption, and cost per unit of power to calculate an estimated profit per day, month, or year.
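A back-of-the-envelope sketch of the kind of estimate those online calculators perform, assuming expected revenue is proportional to one's share of the total network hash rate. All figures in the example call (network size, price, power draw, electricity rate) are hypothetical.

```python
def daily_profit_usd(hash_rate_ths, network_ths, btc_price_usd,
                     power_watts, usd_per_kwh,
                     block_reward_btc=12.5, blocks_per_day=144):
    """Estimate expected daily mining profit in US dollars."""
    # Expected fraction of blocks won, times total daily BTC issuance
    share = hash_rate_ths / network_ths
    revenue = share * block_reward_btc * blocks_per_day * btc_price_usd
    # Cost of running the hardware for 24 hours
    cost = (power_watts / 1000) * 24 * usd_per_kwh
    return revenue - cost

# Hypothetical miner: 14 TH/s on a 40M TH/s network, $6,500 per BTC,
# 1,350 W draw at $0.12 per kWh
print(round(daily_profit_usd(14, 40_000_000, 6_500, 1_350, 0.12), 2))
```

Note how thin the margin is in this illustrative scenario: a modest change in the bitcoin price or the electricity rate flips the result from profit to loss, which is exactly why the volatility warning below matters.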
Another way of bitcoin mining is by joining a mining pool. Cloud mining allows miners to rent hashing power. The first step is to choose a cloud mining service provider; Cryptocompare.com maintains a list of such providers. The second step is to select a cloud mining package; the considerations for a package are its price and the expected return from it. One must remember that the bitcoin price is very volatile, so any promises or calculations based on a higher bitcoin price may be very misleading. Generally, these mining companies require the miner to join a mining pool. The benefit of joining a mining pool is that it increases the chances of earning bitcoins; in turn, the pool charges a small percentage of the earnings.
Contributed by thought leadership blog
afhill · 7 years ago
Text
Squeezing Down a Sigmoid Neuron
The perceptron I used to make my restaurant decision has a binary input of one or negative one — a thumbs up or thumbs down. All inputs and all outputs are thumbs up or thumbs down, nothing in between. That perceptron is likely to err in two ways; it could steer me clear of a lot of good restaurants, and it could have me eating at some really bad restaurants.
What I'd really like is a sliding scale that gives me some indication of the relative quality of the restaurant: ideally a number between zero and one. The closer the output is to one, the more confident the network is in the quality of the restaurant. If it's close to zero, then I should probably go somewhere else. For this purpose, I might try something different: a sigmoid neuron.
A sigmoid neuron can handle more variation in values than the binary choice you get with a perceptron. In fact, you can put any number you want into a sigmoid neuron and then use a sigmoid function to squeeze that number into something between zero and one. It's called a sigmoid function because the function's output forms an S-shaped curve when plotted on a graph. This makes a lot of sense, because an S is almost like a line that's been squeezed into a smaller space. Like the perceptron we looked at in the previous section, sigmoid neurons use weighted inputs. The key difference is that the sigmoid function provides infinitely more variation in values than a simple binary zero or one.
So let’s return to our taco neural network. We’ll use the same criteria for the input layer:
x1 is whether or not the restaurant is clean.
x2 is whether or not there is a Spanish version of the menu.
x3 is whether or not there is a sombrero on the wall.
In our perceptron, we could use only a one or a negative one for each of these input values. In our sigmoid neuron, we can use any number between zero and one. Maybe the restaurant is 0.5 clean. Maybe the menu is 0.3 Spanish. And perhaps there's a sombrero on the restaurant's sign but not on the wall, which rates a score of 0.2.
You can use the same weights you used for the perceptron in the previous section. Cleanliness gets a weight of 3, Spanish menu gets 6, and sombrero gets 2.
Now multiply each input value by its weight and total the results:
(0.5 × 3) + (0.3 × 6) + (0.2 × 2) = 3.7, and squeezing that total through the sigmoid function gives sigmoid(3.7) ≈ 0.98.
This output is a more precise approximation of the restaurant's quality, because the inputs provide a more precise evaluation of each factor. As with a perceptron, a sigmoid neuron can learn by checking the accuracy of its predictions against actual outcomes and then adjusting the input weights accordingly.
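Here is a minimal sketch of that calculation in Python, using the restaurant inputs and weights from the example above; the helper names are my own.

```python
import math

def sigmoid(z):
    """Squeeze any real number into the open interval (0, 1)."""
    return 1 / (1 + math.exp(-z))

def neuron_output(inputs, weights):
    """Weighted sum of the inputs, passed through the sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(z)

inputs = [0.5, 0.3, 0.2]   # cleanliness, Spanish menu, sombrero
weights = [3, 6, 2]
print(round(neuron_output(inputs, weights), 2))   # 0.98
```

A perceptron would have forced that 0.98 down to a flat thumbs up; the sigmoid neuron keeps the graded confidence.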
Adding Bias
Setting a certain threshold enables perceptrons and sigmoid neurons to behave in a way similar to biological neurons, which can either fire or not fire. In binary notation, this can be represented as 0 for not fire and 1 for fire. A perceptron fires only if the result of its function meets the specified threshold.
If you find that a perceptron isn’t firing when it should, you can add bias to its function to move the threshold. Bias simply moves the line that defines the threshold without changing that line’s shape or orientation. Bias is just another number that works with input values and weights to encourage neurons to fire or remain silent.
So let’s say that we’re using our taco neural network and you find that it’s far too conservative in its recommendations on whether or not to eat at a certain restaurant. You’re missing too many good meals because your network is saying the restaurants are not clean enough or don’t have enough Spanish on the menu. So you decide to add a bias to the connection between the input and the function. For example, if you add a bias of +5, that value is added to the sum of the inputs before being passed to the function for processing.
Bias can also be negative. For example, if the neuron is recommending too many restaurants where you would never choose to eat, you can add a negative bias to move the threshold in the opposite direction.
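A short sketch of how a bias term shifts the output, reusing the weighted sum of 3.7 from the restaurant example; the +5 and -5 bias values are purely illustrative.

```python
import math

def sigmoid(z):
    """Squeeze any real number into the open interval (0, 1)."""
    return 1 / (1 + math.exp(-z))

z = (0.5 * 3) + (0.3 * 6) + (0.2 * 2)   # weighted sum from earlier, 3.7

# A positive bias shifts the input up, pushing the output toward 1
# ("fire"); a negative bias pushes it toward 0 ("don't fire").
for bias in (5, 0, -5):
    print(bias, round(sigmoid(z + bias), 3))
```

The shape of the S-curve never changes; the bias only slides the operating point along it, which is exactly the "moving the threshold" behavior described above.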
Earlier, I presented an example based on an experience I had on the archery range at my son’s camp. When I shot some arrows, they landed in a cluster at the upper right of the target, but far from the bull’s eye. I explained that this was an example of low variance and high bias. To make my aim more accurate, I would need to add a negative bias, so my arrows would land lower and more to the left.
The key point to remember is that bias gives you another dial to turn to fine-tune outputs. It’s another tool that artificial neural networks use to learn.
afhill · 7 years ago
Text
brand analysis tools
Two tools are central here: the life cycle of the brand, because, despite its apparent simplicity, it remains a vital tool for strategic management; and brand identity, because it is a unifying principle of all of the company's activities, allowing a semiotic approach to meaning-creating mechanisms, key factors in the consumer world.
the brand's life cycle
Brands are a part of our most intimate history. Their story is not one and the same as that of the company that brought them into existence. They have an independent life and, having become part of our imagination, sometimes survive there long after the company has disappeared.
The story of a brand comprises phases of strong expansion alternating with phases of relative stagnation and, perhaps, rapid decline. This situation is not very different from what is called the life cycle of a product or a company. The life of a brand can be represented on a graph, with time mapped against an estimate of the strength of the brand on the basis of a given indicator (here we chose turnover).
The chart follows the same steps (launch, growth, maturity, decline, revival, and disappearance) that characterize the life cycle of a product. At each of these stages, the problems addressed by the director of the luxury brand are framed in specific terms.
A revival can create a new life for the brand, even if relaunches are rarely as successful as this one. The first jump, in 1995-96, illustrates what we refer to later in this chapter as the leap, and involves a drastic repositioning of the brand, its identity, and, in this case, its target consumer. It shows how to give the brand a new lease of life through new values (which are, of course, compatible with the previous values).
Another luxury brand that has undergone a major revival is Burberry. The brand is clearly in the growth phase of a new life cycle, as a result of a jump in 2000-01, when its operations in Spain went from licensing fees to full revenue. It has come through the recent crisis without great difficulty and showed a strong performance in 2011.
Other brands in the luxury industries are at different stages of their respective life cycles. Bulgari, for example, had a strong growth phase from 1993 to 2001. Then, like many others, it suffered the effects of 9/11. It had been in decline since 2007, with a return to previous levels in 2010. Hopefully, the acquisition in 2011 by the LVMH Group will be an opportunity for a new growth strategy.
Hermès, a brand that is over a century old, has had relatively constant growth since the early 1990s and seems little affected by various economic crises, as if the real attraction of luxury continued its evolution independently of the economic climate.
As with Bulgari, there was a decline in 2003, but growth has since returned to the rhythm of the 1990s, reaching an extraordinary performance in 2010 with an annual growth of 25.4 percent. The aristocratic French brand is far from maturity.
Ferragamo, however, seemed to be deep in its maturity stage, until it finally showed excellent results in 2010 with growth of 24 percent, as shown in Figure 6.6. The discontinuity of 2000 corresponds to the consolidation of sales that followed the acquisition of its Japanese retail subsidiary. Until 2009, the curve of comparative annual business volume is in fact very close to the theoretical profile of the life cycle.
The birth of a brand
How are brands born? We are talking about strong brands, those destined to leave their mark. One thing is certain: fame cannot be guaranteed. This is as true for brands as it is for individuals. Some measures and resources can help their ascent, but success is never guaranteed.
Analysis after the fact inevitably reveals that a strong brand has its origins in an ambitious project supported by the faith of a talented individual. This will often be the founder of the company, whose confidence in the underlying vision, and whose ability to make it a reality, are key advantages. Boldness, vision, and determination are indispensable qualities.
Innovation is the second essential factor. Creative genius lies in reading the mood of the times and offering products that respond to it in new ways, whether in style, in technology, or in the identification of a new need.
In the stylistic innovation category, we find all the big names in haute couture and accessories: Coco Chanel, Christian Dior, Yves Saint Laurent, Salvatore Ferragamo, Giorgio Armani, and so on. These creators were able to express new ideas that caught the interest of enough people to justify launching a durable economic activity.
In technological innovation, there are all the great pioneers of the automotive industry: Ford, Ransom Olds, Bugatti, Panhard, and Renault. And, of course, the likes of Thomas Edison, William Hewlett, Dave Packard, Bill Gates, and Steve Jobs belong on this list, as does Walt Disney.
The innovation of which we speak is rarely synonymous with invention, since it is embedded in the conditions of a product's distribution or its production on a mass scale, for example. Often it operates through the appropriation or extrapolation of already developed theoretical techniques, giving them concrete industrial reality. It is true, for example, that Bill Gates was not a software inventor, but it is also true that this visionary entrepreneur understood, before most people, the potential of the microcomputer, and was able to turn it to his advantage.
As a result, innovation has many dimensions. It can lie in the development of a specific production tool that makes the mass production of a new product possible. It may also consist of revolutionizing the production or distribution of an existing product, or the way a company or its associated services are organized and run. Beyond its practice of knitting garments in white yarn and dyeing them to meet demand, Benetton was born from an innovative distribution system. Zara's success came from logistics and an exceptional ability to read the market's needs; these capabilities make it possible for the brand to deliver products to the places where they are required within 10 days. Ray Kroc founded McDonald's in 1955 and invented fast food. Prada began making a name for itself through the use of nylon in the manufacture of bags.
Communication has also become a major dimension of innovation. Take, for example, Sony invading the streets of the United States, covering walls for the campaign for its latest PSP console; or, more generally, any interactive communication that the Internet makes possible; or the lingerie brand Lise Charmelle, which suddenly became famous in Spain in 2002 as a result of a poster campaign that caught people's attention. In the fashion industry, figures such as Louis Vuitton, Carl Franz Bally, Enrique Loewe, and Guccio Gucci were not creators in the stylistic or technological sense, but artisans who developed their industrial and commercial vision from the mid-nineteenth century onward.
We believe that any business has within it the seed of a brand that will develop if conditions are favorable. How many brands started with the activities of small craftsmen or traders? Many brands that are not international and have low name recognition continue to thrive.
0 notes
afhill · 7 years ago
Text
Social Media presence of some of the most ethical companies
Many companies start with the best intentions and work very hard to produce goods and services that truly meet the needs of those they serve, of the communities in which they operate, and of the environment that provides their raw materials. Many of these companies follow the concept of the "triple bottom line," adding social and environmental imperatives to the existing economic one. For these organizations, corporate social responsibility is not merely fashionable; it is an attitude rooted in the organization's culture. These companies strive for the continued involvement of all stakeholders through an "emphasize and explore" approach to communicating and interacting with their stakeholders.32 In this approach, organizations work with each of the interested parties (sometimes separately, sometimes together) in open dialogue and conversation, using whatever technologies are available, to identify and understand the problems that concern stakeholders and to find ways to solve, or at least mitigate, some or all of them. These companies often receive recognition on lists such as Ethisphere's World's Most Ethical Companies.
Tumblr media
Presence on such a list improves a company's reputation, speaks to its transparency and credibility, and thereby strengthens the company's identity. Getting on these lists, and staying on them, gives companies incentives to improve various organizational aspects, such as how they do business (ethics) and how they affect and serve the communities in which they operate. Since the beginning of the Ethisphere list, twenty-three companies have been listed for all six years, including Aflac, American Express, Fluor, General Electric, Milliken & Company, Patagonia, Rabobank, and Starbucks, among others. Eight of these companies were selected from the WME list based on their representation of all sectors on the list, and they were examined through their presence on social networks, especially Facebook and Twitter.
Examining the Facebook pages of Aflac, American Express, Fluor Corporation, and GE, an apparent lack of activity on Fluor Corporation's page was observed, even though Fluor is one of the most ethical companies in the world according to Ethisphere. Fluor, like Milliken & Company, is primarily a business-to-business (B2B) organization: the main recipients of goods and services from companies such as Fluor and Milliken are other companies or enterprises, not the average person. Companies such as Coca-Cola, Pepsi, Ford Motor Company, and many retail banks, which sell their products and services directly to end users, are classified as business-to-consumer (B2C), so their stakeholder engagement differs from that of B2B firms. The B2C organizations on the list include Aflac and American Express (both B2B and B2C), Starbucks, Patagonia, GE (B2B and B2C), and Rabobank (B2B and B2C). It is therefore not surprising that Fluor and Milliken do not have many "likes" or "people talking about them." However, the B2C companies among these eight are, for the most part, very active on their Facebook pages and Twitter accounts, commenting, conversing, and receiving feedback and requests on behalf of their respective organizations. This engagement is valuable not only because the organizations talk the talk, but because it requires them to walk the walk as well.
Note that while the "number of likes" on Facebook or the "number of followers" on Twitter is not a definitive indicator of a company's popularity (often because a company profile requires the equivalent of "friending" to begin with), the "number of people talking about" a company and the "number of tweets" by companies and others are certainly noteworthy. Although not everyone will be talking positively or sharing positive experiences, these figures certainly indicate user participation and interest, and sometimes the level of involvement of stakeholder groups with the organization. As people talk about the company, and as the company engages with its user groups, the opportunity to improve the company's reputation in the eyes of those users increases.
Discussion and Conclusions
We believe there are only a few reasons why many companies are unable to manage their online identity or social media presence effectively. The main one is that companies still think of social media as just a vehicle to sell their products or services, not as a platform for dealing directly with customers. We believe boards that recognize this reality will be better served in their approach to harnessing the power of social media. Second, companies can no longer hide behind their logos, trademarks, and past glories. They are accountable to their customers, and those customers know how to use their voices on these social platforms; individual voices can become a collective roar, especially when customers feel or perceive that the organization is against them or is not conducting business ethically. Companies need to know that their actions can be challenged and that they are accountable not only to their boards or shareholders but also to the public that consumes their products or services. If 58 percent of people would like a company to respond to a comment (tweet) about a bad experience with the company's product, that number is too large for the company to ignore.
When competitiveness is measured in marginal improvements in product or service quality, the quality of communication used in dealing directly with customers becomes a differentiator, and ineffective communication can lead to brand switching, loss of income, and negative opinion about the company. The costs of not actively engaging customers far outweigh the possible costs of staffing specifically to interact with customers on Facebook, Twitter, and so on. For this reason, we believe that business charters should include direct engagement with customers across all platforms (not only social platforms), along with attention to reputation, values, and ethical behavior.
Companies can no longer get by with a pull strategy and an "if you build it, they will come" (Field of Dreams) approach. Nor can they use communication strategies that allow the organization to maintain the perception of a low level of responsibility for its actions (nonexistence, distancing, association, and bridge-building strategies).33 Especially in times of crisis, organizations must accept a high level of responsibility and adopt an acceptance strategy (full apology, remediation, and penance) or a suffering approach in order to better manage their corporate identity. Since such an approach requires changes in leadership and communication, management must combine the "emphasize and explore" approach with an "equal participation" strategy, building consensus where possible and seeking to maintain a permanent dialogue with customers. Online social media and social networking sites provide organizations and their customers a platform for fair exchange, and it would be reasonable for an organization to take this opportunity to better understand customers' needs and desires and to meet as many of them as possible. The organization's reputation grows, its corporate identity is maintained or improved, and a positive image of the company is left in the minds of customers and the general public.
Just as Leitch and Davenport indicate that visual identity can be both an enabler and an obstacle in achieving an organization's objectives, we aver that mismanagement of corporate identity on social platforms will limit an organization's pursuit of its goals; managing its presence on these social networking sites should align not only with organizational goals but with the organization's core values as well.34 In addition, Nguyen and Leblanc, who studied the nature of the relationship between corporate reputation and corporate image and their impact on retention decisions, found that the degree of customer loyalty is higher when perceptions of both corporate reputation and corporate image are more favorable.35 Moreover, they found that the interaction between the two constructs contributes to explaining customer loyalty. Although their study did not include companies' presence on social networks (Facebook, Twitter, and the like were not yet on the landscape), its results alone should prompt administrators and managers to rethink their brand and communication strategies in the new landscape of social media.
Harquail underlines that organizations' social media strategies should place authentic communication as the ultimate goal.36 Harquail further argues that to take advantage of these opportunities, organizations and reputation-management professionals need to reconceive their role: reputation now emerges from a variety of distributed interactions rather than from the creation of individual reputational artifacts. This means organizations have the opportunity to interact directly with consumers of their products or services on these social platforms, and failing to use them to connect with customers seriously hampers an organization's ability to maintain its reputation and corporate identity, not only on these platforms but beyond them. Gilpin, examining the role of various online communication channels in organizational image building, suggests that the structural and social features of these social media channels give them different roles in the image-building process, creating new challenges for public relations in coordinating image management across different new media.
The most important contribution of this post is its interdisciplinary implications for the fields of communication, marketing, and management. The post reinforces the importance of open communication and continuous engagement through social networking platforms (communication), but it also raises the issue of businesses using social networks solely for self-promotion and visibility (marketing), which may negatively affect corporate identity. If a company's intentions are not sincere, interested parties may question its transparency, credibility, and reputation, and hence its corporate identity, which becomes a key problem for the business.
This post is an attempt to highlight emerging patterns in the behavior of companies using social networks and the impact of that behavior on company reputation and corporate identity. The research is preliminary, however; it needs theory development, identification of key variables, and extensive empirical data to help explain the different relational paths that inform the management of reputation and corporate identity on social platforms. As Boyd and Ellison suggest, "vast unknown waters have yet to be examined" in the study of digital media strategy and social networking sites (SNSs).38 They argue that, methodologically, scholars' ability to make causal claims about SNSs is limited by the lack of experimental or longitudinal studies, and they recommend combining large-scale quantitative and qualitative research with richer ethnographic work on harder-to-reach populations (non-users) to advance understanding of the long-term consequences of these tools. Therefore, future work will involve a number of case studies that focus on how small and large organizations adopt social networks and interact with key stakeholders. Scholars can combine these studies with empirical research on the causal relationship between activity on social networks and customers' perceptions of image, identity, and reputation to advance the management of corporate identity on social platforms.
0 notes
afhill · 7 years ago
Text
Wilson Audio Specialties Alexia Series 2 review
One of the benefits of being a reviewer is that, of the large number of products that pass through my listening room, occasionally there are those that I really would like to see take up more permanent residence. One of these was Wilson Audio Specialties’ Alexia loudspeaker, which I reviewed in December 2013.1 “Its clarity, its uncolored, full-range balance, its flexibility in setup and optimization, and most of all its sheer musicality, are, if not unrivaled, rare,” I wrote, and concluded: “If I were to retire tomorrow, the Wilson Alexia would be the speaker I would buy to provide the musical accompaniment to that retirement.” Nothing I subsequently heard disabused me of that dream, though a couple of other speakers, in particular Vivid Audio’s Giya G3 and KEF’s Blade Two,2 joined the Alexia on my bucket list.
Then, in spring 2017, Wilson announced a Series 2 Alexia. On the surface, the new speaker looks identical to the old, with its 8" and 10" paper-cone woofers loaded with a 3"-diameter port on the large cabinet’s rear, and a 7" midrange driver and 1" silk-dome tweeter, each in its own adjustable module atop the woofer enclosure. (See my December 2013 review for a detailed description of the original Alexia.) However, the price has risen from $48,500/pair in 2013 to $57,900/pair for the Series 2, and there are many improvements. The original Alexia was designed by David Wilson working with Vern Credille, Wilson’s lead acoustic and electrical engineer, and mechanical engineer Blake Schmutz; the Series 2 is the result of much development by Dave’s son Daryl, who is now the Utah company’s CEO. In particular, some of the technology developed for Wilson’s limited edition magnum opus, the WAMM Master Chronosonic,3 has found its way into the Alexia Series 2. Because of all this, I felt that a full review would be more appropriate than a Follow-Up.
Last February, Wilson’s Peter McGrath visited to set up the Alexia 2s in my listening room. Such service is not really a reviewer’s perk—when anyone buys a pair of Wilson Audio speakers, the retailer will install them and do the sort of fine-tuning McGrath performed in my room.
The Series 2
I asked Peter McGrath precisely what changes had been made in the Alexia to create the Series 2. “The two bass drivers remain the same, but the port has been moved to the center of the enclosure so that both speakers launch the back wave in exactly the same way,” he explained. “Although the bass enclosure’s footprint is only about 1" different, the increase in the internal volume is significant, at around 11%. Also, while the front baffle of the ‘Series 1’ was vertical, it’s now angled back about 3–4°, and that gives better time alignment between the upper woofer and the midrange driver. The internal bracing of the low-frequency enclosure is also improved. “The midrange driver is the same in both speakers, but the midrange enclosure has a full 26% increase in internal volume, because of the way we reworked the venting system. The tweeter is now the same Convergent Synergy Mk.5 tweeter we used for the backload of the WAMM Master Chronosonic. The crossover points are very similar, but there have been some modifications, the result of which is that the low impedance dip is nowhere near as severe in the Series 2 as it was in the ‘Series 1.’ The efficiency of the two remains within a dB. “There are a number of other things. Access to the resistors is totally different: you can just pull a plate off and make changes without having to get out the tools. The Aspherical Group Delay time-domain adjustment of the tweeter now has a far greater level of resolution—you can move the tweeter in 1⁄32" increments, twice the number of adjustments as before.
“The spikes and diodes are more substantial than they had been on the first Alexia. And then, on the top plate of the woofer enclosure, the block where all the resonant components of the upper modules couple via the spikes is made out of a material called ‘W Material.’ This is a [mineral-impregnated resin] that we developed for the WAMM. . . . [I]t is far more absorptive of resonant behavior. However, we can’t paint it, which is why it is not colored the way the rest of the speaker is.”
Setup
Peter McGrath followed much the same setup procedure described in my review of the original Alexia.4 Having adjusted the position and tilt of the tweeter and midrange modules for the height of my ears in my listening chair and their distance from the speakers—the exact settings are detailed in the manual’s “Propagation Delay Correction” table—he rolled each speaker back and forth and from side to side on its wheels until he was confident they were close to their optimal positions. Then, using “So Do I,” from singer-songwriter Christy Moore’s This Is the Day (CD, Sony 5032552), and listening to each speaker in turn, he moved the enclosures in 1⁄2" steps in both planes, and adjusted their toe-in until the sound of each Alexia 2 was to his satisfaction. It was time for some critical listening.
Listening
With the Alexia 2s driven by Lamm Industries M1.2 Reference monoblocks, the 1⁄3-octave bass-warble tones on Editor’s Choice (CD, Stereophile STPH016-2) sounded powerful down to the 25Hz band, with the 63, 50, and 40Hz warbles a little higher in level than the bands above them, the 32 and 25Hz warbles exaggerated by the lowest mode in my room, and the 20Hz warble only faintly audible. The half-step–spaced tonebursts on Editor’s Choice spoke cleanly and evenly throughout the bass and midrange regions.
When I listened to the woofer enclosure of an Alexia 2 with a stethoscope, all surfaces were relatively inert. The midrange enclosure, too, was well damped, though on the sidewalls and rear panel I found some low-level modes between 600 and 900Hz, these an octave higher than the modes I’d found on the Alexia “1”—which suggests improved bracing. The dual-mono pink-noise track from Editor’s Choice sounded smooth and evenly balanced, though with some exaggeration of the very lowest frequencies. With the earlier Alexias I’d found that if I moved my head slightly above or below the axis where the sound was best, I became aware of a narrow band of brightness. This didn’t happen with the Series 2s, and it wasn’t until I stood up that the pink noise began to sound colored, acquiring a hollow quality. The central image of the noise signal wasn’t quite as narrow as I hear with top-ranked minimonitors like the BBC LS3/5a or KEF LS50, but it was stable, neither wobbling nor splashing to the sides at any frequency.
0 notes
afhill · 8 years ago
Text
The Sensemaking Loop
Sensemaking is a perpetual cycle of collecting data, making sense out of it, and sharing knowledge throughout our teams and organizations.
These components come together to help you transition raw customer data into meaningful, “sticky” insights.
You can think of the sensemaking loop as the components you’ll need to make an effective case for your product’s strategy. The hypotheses you create with the HPF will be the backbone of this loop.
Sensemaking is a continuous cycle that we use at every stage of the HPF. Each stage will manifest its own insights.
As you engage in sensemaking, you’ll find that it’s not a linear path, where you move from collecting data to sharing your knowledge. Instead, you’ll move back and forth within the loop — collecting data, identifying patterns, sharing stories, and returning to collect more data.
Put another way, sensemaking isn’t a destination, where you try to reach the end of the loop; it’s a continual process that ensures consistent learning and a fully developed understanding of your customers.
Let’s look at each component of the loop more closely.
Data Sources
We’ve talked about the various methods you can employ to validate your hypotheses. While most of the types of experiments we’ve discussed have talking to customers at their heart, your customer and product development strategy should pull from multiple data sources.
Usage statistics, discussion forums, support tickets, and customer relationship management (CRM) systems are all great sources for data. You can also forage through market trend analysis reports, run a mobile marketing campaign, conduct a competitive analysis, or go on a customer visit. The important thing here is that while you’re making sense of the data you’re collecting, you’re also tracking the sources of that data.
Shoeboxes
Before the advent of cloud storage and digital photography, the shoebox was the de facto storage method for photos. Shoeboxes were great because they required little to no organization (you could just wrap a rubber band around the photos from your summer vacation), and they kept all your photos secure and in the same place.
As you begin to cull your data sources, you’ll collect notes, articles, and other assets that comprise your area of study. There are many online collaborative “shoeboxes” to store these types of things. Microsoft SharePoint, OneNote, Evernote, Basecamp, Google Drive, and Dropbox are all tools that allow teams to collect the information they’re gathering. You shouldn’t spend too much time curating or organizing your shoebox. This should be a loose inventory of any data you’ve collected — important or unimportant.
Evidence Files
When you’re on a customer or product development journey, you’re running an investigation. Your evidence file is like a case file: it contains the meaningful bits of data that comprise your point of view, vision, or strategy for your product.
An evidence file could include pictures of a customer’s environment, a direct customer quote, or any other type of signal that points to why you validated or invalidated your hypotheses. For example, you could begin to capture direct quotes from your customer interviews that highlight a particular motivation or problem you hypothesized might exist.
These evidence files should be constantly culled, organized, and reflected upon, as they are the foundation for the case you’re trying to make on behalf of your customer. They can help the team stay organized and up-to-date on the latest findings.
As you begin to curate your evidence files, you’ll find yourself adding pieces of evidence that your gut tells you are meaningful but you’re not yet sure why. Your ability to clearly articulate the underlying meaning will evolve as the evidence file takes shape. You’ll find yourself continually moving things back and forth between your shoebox and evidence file until you’ve refined your collection to its most impactful bits of data.
When starting your project, you may find that all the data you’ve collected feels meaningful, and that’s okay. As your project matures, you’ll need a way to reduce your data signals and separate the “wheat from the chaff.” What you believe to be most meaningful will evolve over time, and so should your evidence file.
Schemas
Schematizing data is the process of applying categories and patterns to your data. We often refer to this as “tagging your data,” giving it meaning and defining it.
For example, you may mark a quote with a “problem” tag when a customer expresses a frustration. This will help you look at all your interviews and identify each time a frustration was articulated.
These tags will help you see patterns in your data and begin to draw conclusions.
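As a rough sketch of what schematizing can look like in practice (the quotes and tag names below are hypothetical, invented for illustration), tagging interview notes with HPF-style parameters and counting how often each tag appears can be done with a few lines of code:

```python
from collections import Counter

# Hypothetical interview snippets, each tagged with parameters
# like "problem", "motivation", or "job-to-be-done".
notes = [
    {"quote": "I never know which invoices are overdue.",
     "tags": ["problem"]},
    {"quote": "I want to look on top of things for my boss.",
     "tags": ["motivation"]},
    {"quote": "Chasing late payments eats my Friday afternoons.",
     "tags": ["problem", "job-to-be-done"]},
]

# Count every tag across all notes so patterns start to surface.
tag_counts = Counter(tag for note in notes for tag in note["tags"])

# How many times a frustration was articulated across interviews.
print(tag_counts["problem"])
```

However your team stores its notes, the point is the same: once each quote carries a tag, you can scan all your interviews at once and see each time a frustration, motivation, or job-to-be-done was articulated.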
Again, the parameters within the HPF automatically get you started. If your hypotheses and Discussion Guides are formulated to capture parameters like job-to-be-done, problem, or motivation, it can be much easier to begin tagging your data using those parameters.
We’ve repeatedly seen teams create spreadsheets to connect the data they’ve collected to their hypotheses. Some spreadsheets are quite simple, tracking the status of hypotheses. Others are more elaborate, containing dashboard-like interfaces with counters and formatting that change the status of a hypothesis from green to red based on the number of times it has been validated or invalidated.
We’ve also seen teams create hypotheses backlogs that help them track the various hypotheses the team might be exploring (and remember others they stopped exploring).
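A minimal version of such a tracker is easy to sketch; the hypothesis names and the green/red rule below are assumptions for illustration, not a prescribed format:

```python
# Hypothetical backlog: validated/invalidated tallies per hypothesis,
# the kind of counts a team might copy out of its spreadsheet.
backlog = {
    "Freelancers struggle to track overdue invoices":
        {"validated": 7, "invalidated": 1},
    "Customers will pay extra for automated reminders":
        {"validated": 2, "invalidated": 6},
}

def status(counts):
    # Green when the evidence for the hypothesis outweighs the
    # evidence against it; red otherwise. Teams can pick any rule.
    return "green" if counts["validated"] > counts["invalidated"] else "red"

dashboard = {hypothesis: status(counts) for hypothesis, counts in backlog.items()}

for hypothesis, light in dashboard.items():
    print(f"{light:>5}  {hypothesis}")
```

Whether it lives in a spreadsheet or a script, the value is the same: the team can see at a glance which hypotheses the evidence supports and which ones to revisit or retire.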
Stories
There are two things you need to get others onboard with your vision:
A compelling story
A way to share it
The most important thing about sensemaking is that it helps you share meaning, not data. Data is important, but emotion and empathy are what compel others to action.
As you begin to identify patterns, you’ll need ways to express your data so that people can easily understand it. Visual elements like charts, graphs, and models can be a powerful way to help others understand what you’ve learned.
You can certainly take the quantitative data you’ve collected and transfer it into pie charts or line graphs, but you should look for more illustrative models as well. For example, you may have identified that there is a tension between customers wanting quality service providers and saving money. Perhaps this tension changes, depending on the type of service the customer is looking for. A customer might be looking to save money when researching lawn care service, but willing to pay much more for quality childcare.
Illustrating these nuances in graphical models can help others easily understand relationships and connections in your data.
Analogies and metaphors are also powerful tools to help convey complex ideas to others. Look for opportunities where your findings parallel other situations that might be more familiar and accessible to them. For example, one of our teams identified parallels between people trying to learn a new programming language and people learning to swim. This comparison helped the rest of the organization empathize with how difficult the challenges were without requiring knowledge about the programming language.
Once you have a compelling story, you need a way to share it with others in your organization. You’ll want to create a continuous communication channel (or multiple channels) that’s easy to use and accessible to everyone. We’ve found the less formal and lightweight the communication channel is, the more likely people are to use it. Leverage existing channels like email or chat clients so you don’t have to encourage your organization to use another channel they can easily ignore.
Throughout the stages in the HPF, you should fall into a pattern of continual sensemaking; these activities should be happening, in parallel, with your customer and product development. You can do this by splitting the teams’ efforts or scheduling a day each week, during development, to stop and make sense of the data you’ve been collecting.
Over time, the continual pattern of the sensemaking loop will increase your understanding and the overall empathy of your organization toward its customers.
At this point, we’ve covered the three phases of the Customer-Driven Cadence: Formulating, Experimenting, and Sensemaking. This cadence happens in each stage of the HPF. You formulate your assumptions into hypotheses, you run experiments to collect data, and you make sense of that data to gain insights.
Now, we’re going to dive into each stage of the HPF and examine the hypotheses and parameters we use to drive us toward better understanding our customers, their problems, and what they find valuable and useful.
0 notes