#ApacheSolr
nitsan · 20 days ago
Text
Apache Solr for TYPO3: Advanced Search Features for Your TYPO3 Website
Have you ever used a search bar on a website only to find results that are slow, confusing, or irrelevant? It can be really frustrating, especially when you're in a hurry to find important information. This problem happens to many TYPO3 websites because the basic built-in search often isn't powerful enough. Fortunately, Apache Solr for TYPO3 provides an excellent solution to make your search faster, smarter, and easier for all users.
What is Apache Solr?
Apache Solr is a free, open-source search engine platform designed to handle large amounts of data quickly and efficiently. It’s built on Apache Lucene, a robust search technology, and is widely used because of its speed and reliability. Solr indexes your website’s content effectively, allowing visitors to find exactly what they're looking for in seconds.
Main Features of Apache Solr
Apache Solr offers several powerful features to enhance your website's search capability:
Fast Results: Delivers instant search results, even for large websites with thousands of pages.
Faceted Search: Enables users to narrow down search results using filters such as categories, tags, or dates (see the sample query after this list).
Autocomplete and Spell-Checking: Offers suggestions as users type and corrects minor spelling errors automatically.
Synonym Support: Understands and matches similar words, ensuring accurate results even if users use different terms.
Document Indexing: Allows users to search within documents like PDFs, Word files, and Excel sheets, not just webpage text.
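Several of these features map onto plain parameters of Solr's HTTP API. Here is a minimal sketch of a single query that combines them; the core URL, the "category" facet field, and the enabled spellcheck/highlighting components are assumptions about your setup, so substitute your own core path and schema fields:

# One request combining full-text search, faceting, spell-checking,
# and highlighting. "category" is a hypothetical indexed field.
curl "http://localhost:8983/solr/select" \
  --data-urlencode "q=typo3 sarch" \
  --data-urlencode "wt=json" \
  --data-urlencode "facet=true" \
  --data-urlencode "facet.field=category" \
  --data-urlencode "spellcheck=true" \
  --data-urlencode "hl=true"

If the matching components are enabled in solrconfig.xml, the response carries facet counts, a "did you mean" suggestion for the misspelled query, and highlighted snippets alongside the normal result list.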
Why Apache Solr is Essential for TYPO3 Websites
The default TYPO3 search is limited and slow, particularly as your site grows. Apache Solr solves this by providing:
Enhanced Speed: Quickly searches large amounts of content without slowing down your site.
Better User Experience: Provides users with clear, relevant results, encouraging them to stay longer on your site.
Customization Options: Easily tailor the search functionality to your specific website needs, ensuring relevant results every time.
Scalability: Handles growth efficiently, making it perfect for websites that continue to add content.
Easy Setup and Useful Extensions
Setting up Apache Solr on your TYPO3 site is straightforward:
Install the Solr Extension: Use the TYPO3 backend or Composer to install the EXT:solr extension (a sample Composer command follows these steps).
Configure Your Site: Adjust simple settings using TypoScript and ensure proper indexing by adding markers around your content.
Regularly Update Indexes: Schedule automatic content indexing via TYPO3's scheduler to keep your search updated.
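For Composer-based TYPO3 installations, the install step usually comes down to a single command. A minimal sketch (this is the published package name for EXT:solr, but check which version matches your TYPO3 core before pinning it):

# Install the EXT:solr extension into a Composer-managed TYPO3 project.
composer require apache-solr-for-typo3/solr
# Afterwards, activate the extension in the TYPO3 backend and point it
# at your Solr server via the extension's configuration and TypoScript.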
TYPO3 also provides helpful extensions to make your setup easier:
EXT:solr: The core extension linking TYPO3 with Apache Solr, enabling all advanced search functionalities.
solr file_indexer: Helps index file contents so users can search inside documents.
DDEV Apache Solr for TYPO3: A development tool to easily set up and test Solr locally.
Benefits of Implementing Apache Solr
By integrating Apache Solr into your TYPO3 website, you gain:
Improved Site Engagement: Visitors stay longer as they easily find what they need.
Increased Efficiency: Reduces the effort needed to find important information, benefiting both users and administrators.
Professional Search Experience: Gives your website a polished, professional feel, enhancing user trust and satisfaction.
Conclusion
Apache Solr significantly upgrades your TYPO3 site's search capability, providing fast, accurate, and user-friendly results. It's easy to set up, customize, and maintain, making it a valuable tool for improving your website’s performance. Start using Apache Solr today to enhance the search experience for your users and watch your website engagement grow!
seo-vasudev · 2 years ago
Text
At Nextbrick, we are dedicated to helping businesses harness the full potential of Apache Solr, the industry-leading search platform. Our team of Solr experts is committed to providing top-notch support and consulting services that will elevate your search capabilities to new heights.
🔍 Solr Support: Whether you're just starting with Solr or have an existing implementation, our skilled professionals are here to ensure your Solr infrastructure runs seamlessly. From troubleshooting and performance optimization to upgrades and maintenance, we've got you covered.
🤝 Consulting Services: Looking to implement Solr for your organization or optimize your current setup? Our experienced consultants will work closely with you to design tailored solutions that align with your business objectives, ensuring you get the most out of Solr's robust features.
Join hands with Nextbrick to enhance your search experience and gain a competitive edge in today's data-driven world. Let's embark on this Solr journey together! Connect with us today to learn more about how we can assist you.
Visit: https://nextbrick.com/solr-consulting-support-services/
#Solr #Search #Consulting #Support #Nextbrick #SearchPlatform #SolrExperts #BusinessSolutions #Solrconsulting #SolrSupport #ConsultingServices #Nextbrick #Apachesolr
thinkstraight · 3 years ago
Link
Apache Solr is an open-source, high-performance Java search server; in big-data setups it can also search data stored in HDFS. It improves the search capabilities of websites by enabling full-text search and near-real-time indexing, and it handles data in many formats, including tabular data, text, and geospatial locations. The search engine is built on the Apache Lucene Java library.
phatcovers · 7 years ago
Photo
Fun times with Apache Solr building a knowledge management portal! #apachesolr #webdeveloper #softwaredevelopment #codeanddesign #codeanddesignlife #codelife #programming #brogrammerslife #coding #ncblogger #afrotechniac #developers #developerslife (at Orange County, North Carolina)
proponenttechnologies · 4 years ago
Photo
If you need a cost-effective solution for your business, Open Source Web Development is an excellent option, and Proponent Technologies is one of the best Open Source Web Development companies. We use up-to-date software like PHP, HTML5, Rails, Python, Perl, Magento, OpenCart, Zen Cart, WordPress, Drupal, Joomla, OpenX, Ubuntu, Apache Solr, Android, etc.
mrhackerco · 5 years ago
Photo
Critical vulnerability affecting Apache Solr is found | MrHacker.Co #apachesolr #arbitrar #cybersecurity #hacking #remotecodeexecutionrce #hacker #hacking #cybersecurity #hackers #linux #ethicalhacking #programming #security #mrhacker
hacknews · 5 years ago
Photo
Critical vulnerability affecting Apache Solr is found #apachesolr #arbitrar #cybersecurity #hacking #remotecodeexecutionrce #vulnerability #hacking #hacker #cybersecurity #hack #ethicalhacking #hacknews
dbi-srl · 8 years ago
Photo
Apache Solr Memory Tuning for Production. #BigData #DataScience #Analytics #ApacheSolr https://t.co/pXHFlBhHk9 pic.twitter.com/zLDGVaasPP
— Jesús Mínguez (@jminguezc) June 9, 2017
vinh-tran · 12 years ago
Text
Use Apache Solr to search in files
[From: https://www.acquia.com/blog/use-apache-solr-search-files]
This article shows how I set up Tika and the Apache Solr Attachments module on my MacBook Pro running Snow Leopard (OS X 10.6). There are two ways to run Tika: either as a client-side component (where the client is Drupal), or as a server-side component (the server being Solr). The advantage of running Tika client-side is that the files don't need to travel over the wire to have their text extracted. Especially in the case of rich media (movies, images, music) this is quite desirable. Why send a 20 MB video over the network just to get 15-20 lines of text from it? Another important advantage of running Tika client-side is that it works with Acquia Search.
The disadvantages of running Tika client-side are that you have to install it on every client (in a multi-webserver environment, for example), and the processing workload then falls onto your webserver instead of offloading it to the Solr server. Acquia Search also doesn't currently support the option of offloading extraction to the Solr server, though it is a feature we might add.
This article will show you how to install Tika on the client.
What you need
You need java 1.6 (1.5 should work, but not as many document types are supported). Test this by typing java -version at the command line. Here's what I see on my machine:
robert$ java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)
You also need the java build tool called Maven. If you're on OSX 10.6 like I am this should already be the case. Check by typing mvn -v at the command line. Here's what I see on my machine:
robert$ mvn -v
Apache Maven 2.2.0 (r788681; 2009-06-26 15:04:01+0200)
Java version: 1.6.0_17
Java home: /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
Default locale: en_US, platform encoding: MacRoman
OS name: "mac os x" version: "10.6.2" arch: "x86_64" Family: "mac"
Building Tika
No matter how you decide to run Tika (client or server side), you'll need to get Tika first. A zip file of the source code is available from the download page on lucene.apache.org. Alternately you can get the source directly from the subversion repository with this command:
svn export -r901084  http://svn.apache.org/repos/asf/lucene/tika/trunk tika-0.6
Either way you'll end up with a directory called tika-0.6. Next we're going to use a tool called Maven to build Tika from the source files. From the command line, change directories into the new tika-0.6 directory. Check to see that you're in the directory containing the pom.xml file. Then type the following two commands. The first gives Maven enough memory, and the second tells it to build Tika:
export MAVEN_OPTS="-Xmx1024m -Xms512m"
mvn install
The first time I tried this I was on the train between Cologne and Brussels using the spotty wireless connection on the Thalys. The build process broke three times and each time I just typed mvn install and it picked up where it had left off, eventually succeeding. Later I tried the build process again in normal conditions and it worked seamlessly.
When Maven is done building, your nugget of gold will be tika-app/target/tika-app-0.6.jar. Let's test it out! Still from the tika-0.6 directory, try extracting some text from a file using the new tika-app-0.6.jar file:
java -jar ./tika-app/target/tika-app-0.6.jar -t [path/to/a/file]
Replace [path/to/a/file] with the path to some interesting file you'd like to test. If everything goes right you'll get the text from that file dumped to stdout (which means you'll see it scrolling by in your command terminal).
As a final step I moved the tika-app-0.6.jar file to ~/bin (the directory where I keep my custom scripts and libraries) and named it tika.jar. This is optional. You can keep the jar file wherever it makes sense to you. Just take note of its absolute path, as you'll need it when configuring the apachesolr_attachments module.
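With the jar parked in ~/bin as described, a small shell alias keeps the command short. A sketch (the alias name and the sample document path are made up for illustration):

# Shorthand for the Tika jar installed above.
alias tika='java -jar ~/bin/tika.jar'
# Extract plain text from any supported document:
tika -t ~/Documents/example-report.pdf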
Now we're ready to use Tika within the context of Drupal and Apache Solr searching.
Drupal, Solr and the Apache Solr Attachments module
For instructions on installing Drupal, Solr, or the Apache Solr module, please refer to the linked resources. You can also get up and running very quickly using the Acquia Drupal Stack Installer and Acquia Search (try it for free).
Note that you may need to give Solr more memory when doing attachment searching. For this example I tested using the Jetty container that comes with the Solr download, but I started it using this command:
java -Xmx1024m -Xms512m -jar start.jar
Get and install the Apache Solr Attachments module in the normal fashion. There is one configuration screen, found at q=admin/settings/apachesolr/attachments. Most of the options are self-explanatory. You may want to allow a wider set of file extensions. For exact information about what is available, see the Tika supported formats page.
Upload files, run cron, and search
The only thing left to do is to upload some files, run cron, and do a search. The search results that match text in files link to both the file and the node they belong to. Here's an example of me searching for "merlinofchaos" and finding the views-6.x-3.0-alpha2.tar.gz file that I uploaded (yes, Tika can search in tar.gz files, and yes, that's the whole Views 3 module).
Here's an example of me searching for "Drupal" and finding both a Word document and an iWork Keynote file.
wouters-frederik-blog-blog · 14 years ago
Text
Adding multiple images to your apachesolr search results.
It's possible that this is not the most elegant solution, but it works for me.
It adds images with a specific imagefield to the search index, and when the node appears in the apachesolr search results, the images are returned together with the snippets and all.
I divided it into these main steps.
1. the coding
2. the theming
1. the coding
First we need to add the images to the index.
function MYMODULE_apachesolr_update_index(&$document, $node) {
  if (count($node->field_CUSTOMIMAGEFIELD)) {
    // Add each image's file path to the Solr search index schema as a
    // separate multi-valued field.
    foreach ($node->field_CUSTOMIMAGEFIELD as $image) {
      $document->setMultiValue('sm_field_CUSTOMIMAGEFIELD', $image['filepath']);
    }
  }
}
Next, modify the search query sent to Solr so the image fields are returned as well:
function MYMODULE_apachesolr_modify_query(&$query, &$params, $caller) {
  // Ask Solr to include the image field in the returned field list.
  $params['fl'] .= ',sm_field_CUSTOMIMAGEFIELD';
}
Out of the box, the index does not know that the image field is actually a string. The hook_apachesolr_cck_fields_alter() implementation below changes this mapping so the file path will be added as a string to the index. A callback function is also registered.
function MYMODULE_apachesolr_cck_fields_alter(&$mappings) {
  $mappings['per-field']['field_CUSTOMIMAGEFIELD'] = array(
    'callback' => 'MYMODULE_apachesolr_callback',
    'index_type' => 'string',
  );
}
The callback function referenced above:
/**
 * A function that gets called during indexing.
 * @node The current node being indexed
 * @fieldname The current field being indexed
 *
 * @return an array of arrays. Each inner array is a value, and must be
 * keyed 'value' => $value
 */
function MYMODULE_apachesolr_callback($node, $fieldname) {
  $fields = array();
  foreach ($node->$fieldname as $field) {
    // In this case we are indexing the filemime type. While this technically
    // makes it possible that we could search for nodes based on the mime type
    // of their file fields, the real purpose is to have facet blocks during
    // searching.
    $fields[] = array('value' => $field['filemime']);
  }
  return $fields;
}
2. the theming
In search-result.tpl.php, add the following. This is a draft version; you should improve it:
<?php
$i = 0; // Counter: stop after $MAX_AMOUNT images (define $MAX_AMOUNT above).
if ($show_galleries) {
  if (isset($result['fields']['sm_field_photogallery_image'])) {
    foreach ($result['fields']['sm_field_photogallery_image']['value'] as $image) {
      if ($i > $MAX_AMOUNT - 1) {
        break;
      }
      $i++;
      ?>
      <a title="<?php print $title; ?>" href="<?php print $url; ?>">
        <?php print theme('imagecache', 'photogallery_thumb', $image); ?>
      </a>
      <?php
    }
  }
}
?>
mrhackerco · 5 years ago
Photo
Critical vulnerability in Apache Solr; update patches already available | MrHacker.Co #adobe #apachesolr #cybersecurityact #ebay #hacking #hacker #hacking #cybersecurity #hackers #linux #ethicalhacking #programming #security #mrhacker
vinh-tran · 12 years ago
Text
Parallel indexing for Apache Solr Search Integration
[From: http://nickveenhof.be/blog/parallel-indexing-apache-solr-search-integration]
Recently I've been involved with drupal.org, upgrading the site to the latest version of Apache Solr Search Integration (from 6.x-1.x to 6.x-3.x, and in the near future to 7.x-1.x). This upgrade path is necessary because we still want a unified search capability across Drupal 6 and Drupal 7 sites, for example groups.drupal.org and drupal.org.
If you want to know more about multisite search capabilities with Solr and Drupal, I suggest you read http://nickveenhof.be/blog/lets-talk-apache-solr-multisite as it explains a whole lot about this subject.
One issue that we encountered during the migration is that all content needed to be reindexed, which takes a really long time because Drupal.org has so much content. The switch needed to happen as quickly as possible, and the out-of-the-box indexer prevented us from doing this. There are multiple solutions to the dev/staging/production scenario for Solr, and I promise I will tackle those in another blogpost. Update: I just did http://nickveenhof.be/blog/content-staging-apache-solr
This blog post aims to make indexing much quicker by utilizing all the horsepower you have in your server.
Problem
Take a look at the normal indexing scheme:
This poses a number of problems that many of you have encountered. The indexing process is slow not because of Solr, but because Drupal has to process each node one at a time in preparation for indexing. And a node_load/view/render takes time. And what about Solr? Solr does not even sweat handling the received content ;-)
function apachesolr_index_node_solr_document(ApacheSolrDocument $document, $node, $entity_type, $env_id) {
  // ...
  // Heavy part starts here
  $build = node_view($node, 'search_index');
  unset($build['#theme']);
  $text = drupal_render($build);
  // Heavy part stops here
  // ...
}
You could avoid this by commenting that code out and doing without the full body text, which is useful if you only want facets. It can also be optimized by completely disabling caching, since we do not want these nodes to be cached during a massive indexing loop. You can read the Cache disable blog post to figure out how I've done that.
Architecting a solution
And this is the parallel indexing scheme:
I went looking for a solution that could provide an API to achieve this architecture. After learning a lot about PHP's fork mechanism, it seemed way too complex for what I needed. Httprl, on the other hand, looked like a good solution: its API allowed me to execute code in another bootstrapped Drupal instance and, by making blocking requests in parallel, to run the same function multiple times with different arguments.
What does httprl do? Using stream_select(), it sends HTTP requests out in parallel. These requests can be made in a blocking or non-blocking way: blocking waits for the HTTP response; non-blocking closes the connection without waiting for the response. The API for httprl is similar to the Drupal 7 version of drupal_http_request().
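To make the fan-out idea concrete, here is the same pattern sketched in plain shell: one controller issues several blocking HTTP requests in parallel and waits for them all, which is what httprl does from inside Drupal. The endpoint path below is purely hypothetical; the real module registers its own callback via hook_menu:

# Hypothetical illustration only: fan out 8 parallel, blocking HTTP
# requests and wait for all of them to finish.
seq 1 8 | xargs -P 8 -I{} curl -s "http://mydrupal.dev/index-chunk/{}" > /dev/null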
As a result, I created the Apache Solr Parallel Indexing module, which you can download, enable, and configure to achieve parallel indexing.
Try it out for yourselves
Enable Module
Go to admin/settings/apachesolr/settings
Click Advanced Settings
Set the number of nodes you want to index (I'd set it to 1000)
Set the number of CPUs you have (I set it to 8; I have 4 physical cores, but they can handle 2 indexing processes each)
Make sure you have the hostname set if your IP does not directly translate to your domain
If your IP does not resolve to your Drupal site, go to admin/settings/httprl and set it to -1. This will almost always be the case for testing
Index:
Using the batch button in the UI
Using drush: drush --uri="http://mydrupal.dev" solr-index
As you can see, the drush command stays the same. The module takes over the batch logic, so regardless of the UI or drush, it will use multiple Drupal bootstraps to process the items.
Results
Hardware : Macbook Pro mid 2011, i5, 4GB RAM, 256GB SSD
I've seen a 4x-6x improvement in time depending on the amount of complexity you have in node_load/entity_load.
Without Parallel Indexing
Calculation : 40k nodes in 20 minutes
Nodes per second : 33
With Parallel indexing
Calculation : 117k nodes in 19 minutes
Nodes per second : 112
Another example I got from swentel is this :
Without Parallel Indexing
Calculation : 644k nodes in 270 minutes
Nodes per second : 39
With Parallel indexing
Calculation : 664k nodes in 90 minutes
Nodes per second : 119
Extrapolating to a million nodes: at roughly 110 nodes per second, 1,000,000 / 110 ≈ 9,000 seconds, so on average about 3 hours to finish the whole lot. Compared with a blog post about speeding up indexing that clocked in at 24 hours per million, this is a massive improvement.
In the real-life case of swentel, this had an impact of a factor of 3, meaning the indexing went three times faster than without the module. I'd say it's worth a look at least.
Take a look at the screenshots below to see how I measured and monitored all of this; it was fun!
Future & Improvements
It is still an exercise in balance: the bigger your Drupal site, the longer it takes. The drupal.org team has this in production and encountered some problems with overloading their systems. If you overload the system, a request may exceed its timeout and die, which means you won't get feedback on how many items it has processed. As a general rule, do not set the number of CPUs too high; perhaps start with half of what you really have and experiment. Be conservative with these settings and monitor your system.
I also would like to experiment with the Drupal queue, but it is a D7-only API, and since this module had to work with Drupal 6 and 7, I opted for this simpler approach. There is a great blog post about Search API and queues, but it involves some coding.
vinh-tran · 12 years ago
Text
Apache Solr Multi-core Setup using Jetty
[copy from: http://drupal.org/node/484800]
Setting up one Solr server with two or more Drupal sites takes some additional configuration. If this is not done, all of the data for each site goes into the same index and when searching on one site, results are returned from both sites. If that's not the desired result (faceting won't currently work correctly), then it is necessary to set up a separate Solr core for each site. Each core is a totally independent search index.
These instructions are for Windows XP/Server 2K3 so the paths might look a bit different for folks using Linux or Mac servers, but this should work.
Download Solr 1.4.1 from a mirror: http://www.apache.org/dyn/closer.cgi/lucene/solr/
Download the Apache Solr Drupal module and unzip wherever you put your Drupal modules.
For Drupal 6.x-1.x, download the Solr PHP Client library (http://code.google.com/p/solr-php-client/downloads/list) and unpack it so it is under the apachesolr module directory as SolrPhpClient (i.e. sites/all/modules/apachesolr/SolrPhpClient).
Unpack the tarball OUTSIDE OF THE WEB ROOT.  This will create a folder called "apache-solr-1.4.1".
Make a copy of the "example" directory found in the "apache-solr-1.4.1" directory; call it "drupal".
Go to "apache-solr-1.4.1/drupal/solr/conf" and rename schema.xml and solrconfig.xml.  Add a .bak to the end or something.
Go to where you unzipped the Solr Module.  Copy "schema.xml" and "solrconfig.xml" to "apache-solr-1.4.1/drupal/solr/conf"
Delete the following directories. They're just in the way:
- example-DIH
- exampleAnalysis
- exampledocs
- work
Copy "drupal/multicore/solr.xml" to "drupal/solr/solr.xml" 
Delete the "drupal/multicore" directory.
Now create directories within the "drupal/solr" directory for each site you want to use Solr with. For example, if I had two sites "anovasolutions.com" and "myrandomideas.com" and I wanted to use Solr with them both, I might create the following directories:
drupal/solr/site_anovasolutions
drupal/solr/site_myrandomideas
Copy the "drupal/solr/conf" directory into each directory you just created. When you're done each site's directory should have a copy of "conf" in them.
Now alter the solr.xml file you copied over to list both sites.  So your solr.xml would look like this:
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="anovasolutions" instanceDir="site_anovasolutions" />
    <core name="myrandomideas" instanceDir="site_myrandomideas" />
  </cores>
</solr>
Now fire up the Jetty servlet container from the command line in the drupal directory with: "java -jar start.jar"
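Before wiring up Drupal, it's worth confirming that both cores actually came up. A quick sketch using Solr's standard core-admin and select handlers (adjust host, port, and core names to whatever you put in solr.xml):

# List the cores the running Solr instance knows about.
curl "http://localhost:8983/solr/admin/cores?action=STATUS"
# Fire an empty match-all query at one core to confirm it responds.
curl "http://localhost:8983/solr/anovasolutions/select?q=*:*&rows=0"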
Enable the "Apache Solr framework" and "Apache Solr search" modules. Also, enable the core Drupal Search module if you haven't already. Do this for both sites.
Navigate to the Solr Settings page for each site (?q=admin/config/search/apachesolr/settings/solr/edit):
- Solr host name: localhost
- Solr port: 8983
- Solr path for anovasolutions.com: /solr/anovasolutions
- Solr path for myrandomideas.com: /solr/myrandomideas
- Number of items to index per cron run: I always set this to 200.
- Enable spellchecker and suggestions: Check that...it's sweet!
Then go to the Solr server's edit settings and change the URL from http://127.0.0.1:8983/solr to http://127.0.0.1:8983/solr/anovasolutions.
Click "Save Configuration."  The first time around it'll probably tell you it can't reach the server, but if you refresh the page you'll be good to go.  
Note that you're going to have to start Solr every time you reboot the machine.  Windows users, set up a scheduled task.  *nix/Mac users, follow these instructions.  There's a great walkthrough there as well.
Check out admin/blocks.  You'll see a bunch of new blocks related to Solr.  I'd just activate all of these so you can get an idea of what you're dealing with.
You're going to need to run cron until the entire site is indexed.  If you have a lot of content, this could take a while.
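If you'd rather not trigger cron by hand, a small loop against Drupal's cron endpoint will work through the backlog. A rough sketch, assuming a Drupal 6 site whose cron.php needs no key (Windows users can do the equivalent with a scheduled task):

# Each cron run indexes the configured number of items (200 per the
# setting above), so repeat until the whole site is indexed.
for i in $(seq 1 50); do
  curl -s "http://anovasolutions.com/cron.php" > /dev/null
  sleep 10
done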
vinh-tran · 12 years ago
Text
Apache Solr and Drupal 7 note
Cool site: http://nickveenhof.be
Apache Solr Mastery: Adding custom search paths with hook_menu: evolvingweb.ca/story/apache-solr-mastery-adding-custom-search-paths-hookmenu
Search smarter with Apache Solr, Part 1: Essential features and the Solr schema: www.ibm.com/developerworks/java/library/j-solr1/
Synonyms in Apache Solr: https://sites.google.com/site/kevinbouge/synonyms-in-apache-solr
Synonyms: drupal.org/project/synonyms
Search API Combined Fields: drupal.org/project/search_api_combined
How to get taxonomy synonyms working robustly with Search API?: drupal.stackexchange.com/questions/42161/how-to-get-taxonomy-synonyms-working-robustly-with-search-api
Configuration and Installation of apache-solr and tomcat on your Drupal site : nawaz-ahmad.blogspot.com/2011/05/configuration-and-installation-of.html
Apache Solr Multilingual: drupal.org/project/apachesolr_multilingual
Apachesolr Term: drupal.org/project/apachesolr_term
Apache Solr Config Generator: drupal.org/project/apachesolr_confgen
Apache Solr Term Proximity: drupal.org/project/apachesolr_proximity
Facet API Bonus http://drupal.org/project/facetapi_bonus
Currently Facet API Bonus includes:
Facet Dependency: Dependency plugin to make one facet (say "product category") show up depending on other facets or specific facet items being active (say "content type" is "product" or "service"). Very flexible; supports multiple facets as dependencies, regexps for specifying facet item dependencies, and an option for how to behave when a dependency is lost.
Filter "Exclude Items": Filter plugin to exclude certain facet items by their markup/title or internal value (say excluding "page" from "content types"). Regexp are also possible.
Filter "Rewrite Items": Filter plugin to rewrite labels or other data of the facet items by implementing a new dedicated hook_facet_items_alter (in a structured array, before rendering). Very handy to rewrite list field values or totally custom encoded facet values for user friendly output.
Filter "Do not display items that do not narrow results": This filter checks the number of items that will be displayed after activating facet link and removes the link if the number is the same as currently displayed. If link has children in hierarchical structure, it won't be removed.
Filter "Do not show facet with only X items" This filter checks total number of links and if number is less than X, we remove all items and hide block completely. Block will not be hidden if there any active items in it.
Filter "Show only deepest level items" Removes all items that have children.
Integration with Page Title module: Now you can set search (Views) page titles using the Page Title module. The module provides tokens in the 'facetapi_results' and 'facetapi_active' groups, so the title can display the number of results on the page or the values of active facets. As there can be multiple active facet values, please use the following pattern for facetapi_active tokens:
list<[facetapi_active:facet-label]: [facetapi_active:active-value]>
This will make a comma-separated list of active facet labels and their values.
Current Search block Reset Filters link: Adds a link to the Current Search block that resets all applied facets. The link text is customizable.
Date Facets: drupal.org/project/date_facets